Server-side rendering of JavaScript and SEO

There has been a lot of hype around the SEO capabilities of public-facing, full-blown JavaScript applications, and many popular frameworks, including Angular and React, now allow an initial state of HTML to be rendered to the client before the client-side interactions take over.

I think we are looking at this from the wrong end.

IMO, Google and Facebook should stop encouraging people to lean on these initial rendering capabilities and focus instead on improving the other important capabilities of their JavaScript frameworks. These frameworks were created for a purpose, and that is to make a developer's life easy.

What are we fundamentally trying to cover up? If a search engine can't index a JavaScript-driven page, then that is the search engine's problem. It indicates that the search engine has not come of age.

If users are not able to find useful information through a particular search engine, they will flock to a different search engine that can provide those results.

History has shown that, and I am sure it is going to repeat itself.

When an HTTP server can execute a full-blown JavaScript website and push an equivalent HTML page, complete with compatible JavaScript objects that work cross-browser, then I can say JavaScript frameworks have reached their goal. Until then, nah.

I am not ignoring performance considerations or UI loading times, but my point here is focused purely on SEO.

F#: trying the FSharpKoans

And now I am back to F# again 🙂

I started learning F# through Pluralsight videos.

Two that I found really helpful, and that at least made me enthusiastic about opening Visual Studio and trying out some code samples, were:

F# Jumpstart and F# Functional Data Structures by Kit Eason.

The rest all seemed too high level for me at that moment, mainly because I am just a beginner.

Then I bought the book F# for C# Developers, hoping that the good old way of reading and understanding might help. I am in no way an expert to judge this book, but for me the content was very dull, and so it is now a book of "dust" 🙂

Gradually I jumped into other work-related things and couldn't move forward with the idea.

But recently I had a brief conversation with a person who had experience in F#, and while talking he mentioned the F# koans on GitHub as a nice way to learn F#.

So I went to GitHub, searched for and downloaded the project, fired up VS 2015, and it straight away showed me the first "FILL ME IN" puzzle.

I have now completed everything up to AboutTheStockExample, and to me this seems like a nice way of learning F#.

I will tell you why I liked the idea.

When you are watching a video or reading a book, you tend to feel like, oh, I already knew that, let's skip that part.

But when learning by debugging, we are faced with a challenge and need to work our way out of the problem. For a coder, solving a problem is the most enjoyable thing (IMO): unravelling all the mysteries behind the badly behaving piece and putting it back together properly.

So here goes my try at the stock solution. By the way, if someone is reading this and can suggest a better solution, it would be really helpful.

[<Koan(Sort = 15)>]
module ``about the stock example`` =
    
    let stockData =
        [ "Date,Open,High,Low,Close,Volume,Adj Close";
          "2012-03-30,32.40,32.41,32.04,32.26,31749400,32.26";
          "2012-03-29,32.06,32.19,31.81,32.12,37038500,32.12";
          "2012-03-28,32.52,32.70,32.04,32.19,41344800,32.19";
          "2012-03-27,32.65,32.70,32.40,32.52,36274900,32.52";
          "2012-03-26,32.19,32.61,32.15,32.59,36758300,32.59";
          "2012-03-23,32.10,32.11,31.72,32.01,35912200,32.01";
          "2012-03-22,31.81,32.09,31.79,32.00,31749500,32.00";
          "2012-03-21,31.96,32.15,31.82,31.91,37928600,31.91";
          "2012-03-20,32.10,32.15,31.74,31.99,41566800,31.99";
          "2012-03-19,32.54,32.61,32.15,32.20,44789200,32.20";
          "2012-03-16,32.91,32.95,32.50,32.60,65626400,32.60";
          "2012-03-15,32.79,32.94,32.58,32.85,49068300,32.85";
          "2012-03-14,32.53,32.88,32.49,32.77,41986900,32.77";
          "2012-03-13,32.24,32.69,32.15,32.67,48951700,32.67";
          "2012-03-12,31.97,32.20,31.82,32.04,34073600,32.04";
          "2012-03-09,32.10,32.16,31.92,31.99,34628400,31.99";
          "2012-03-08,32.04,32.21,31.90,32.01,36747400,32.01";
          "2012-03-07,31.67,31.92,31.53,31.84,34340400,31.84";
          "2012-03-06,31.54,31.98,31.49,31.56,51932900,31.56";
          "2012-03-05,32.01,32.05,31.62,31.80,45240000,31.80";
          "2012-03-02,32.31,32.44,32.00,32.08,47314200,32.08";
          "2012-03-01,31.93,32.39,31.85,32.29,77344100,32.29";
          "2012-02-29,31.89,32.00,31.61,31.74,59323600,31.74"; ]
       

    [<Koan>]
    let YouGotTheAnswerCorrect() =
         
         let returnSecondTupleValue(tupleValue1,tupleValue2)=
            tupleValue2

         let getMaxVarianceOfStock =
            stockData.Tail
            |> Seq.map(fun t -> t.Split(','))
            |> Seq.map(fun t -> t.[0],abs(System.Double.Parse(t.[1])- System.Double.Parse(t.[4])))
            |> Seq.maxBy returnSecondTupleValue

         let result, _ =  getMaxVarianceOfStock
        
         AssertEquality "2012-03-13" result

I will explain getMaxVarianceOfStock and how I struggled to get this one correct.

I was mostly confused by brackets and how lambda functions need to be constructed. I put the logic for splitting and returning a tuple into the first lambda function, and then VS started to complain about missing brackets and "expected 'in' but found"... what was that?!

Then I understood that I could not attack this in one go and had to find out how Seq.map and lambdas work, so Stack Overflow to the rescue; I got sufficient hints to construct the first Seq.map, and then things started to fall into place.

So what this function currently does is split the sequence of CSV lines into a sequence of string arrays (seq<string[]>); that result is fed to my second Seq.map, which converts each inner array into a tuple, so the resulting sequence looks like seq<string * float>; yes, a sequence of tuples.

Now I feed the result to Seq.maxBy, which accepts a function. I went this route because in the final result I need the date string, but for computing the maximum I only need the second tuple value; returnSecondTupleValue returns the absolute difference we calculated, Seq.maxBy finds the maximum over those values and gives back the tuple that had the maximum value.

Voila, the test passed. Oh, by the way, I used stockData.Tail because I didn't want the header row to mess up my tuple conversion logic.

Wow! I feel like I have invisibly used a lot of LINQ projection statements (keeping the C# world in mind). But it is really tidy and readable; I am impressed with F#.
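
For comparison, here is roughly what the same pipeline looks like with LINQ in C# (a sketch, assuming stockData is the same list of CSV lines, e.g. a List<string>, with System and System.Linq imported):

// Roughly the same pipeline in C# LINQ.
var maxVariance = stockData
    .Skip(1)                                  // drop the header row, like stockData.Tail
    .Select(line => line.Split(','))
    .Select(cols => new
    {
        Date = cols[0],
        Variance = Math.Abs(double.Parse(cols[1]) - double.Parse(cols[4]))
    })
    .OrderByDescending(x => x.Variance)       // the Seq.maxBy equivalent
    .First();

// maxVariance.Date == "2012-03-13"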

Static variables in an ASP.NET web application and their scope

What is the impact of a static variable in an ASP.NET application?

Imagine you are storing the user id of the currently logged-in user in a static variable and then using it to check permissions.

Like

bool isAdminRole = Tools.CheckRole(Tools.CurrentUserID, PAGE_SPECIFIC_ROLE);

and, let's say, in Tools.cs we have

public static int CurrentUserID;

public static bool CheckRole(int userId, string roleName)
{
    DBService dbService = new DBService();
    var isRolePresent = dbService.CheckRole(userId, roleName);
    return isRolePresent;
}

When the user logs in to the application, the function below is called to set the user id in that static variable.

public void SetUserIdForAuthenticatedUser(Guid userId)
{
    if (CurrentUserID <= 0)
    {
        DBService dbService = new DBService();
        int currentUserId = dbService.GetUserIdForAuthenticatedUser(userId);
        CurrentUserID = currentUserId;
    }
}

So will this be a problem?

Imagine a user with fewer privileges logs into the system first, and then a user with higher privileges logs in. What is going to happen? The user with higher privileges will not be able to use all the functionality of the app, because of this static variable. But why?

That is because the static variable still holds the user id of the person with fewer privileges, not the user id of our subject. A static member in an ASP.NET application lives for the lifetime of the application domain and is shared across all requests and all users, so when our subject (who had the proper role) tried to access the page, the user id used to check permissions was that of the person with fewer privileges.

The check for CurrentUserID > 0 really exposed the problem; if that 'if' statement weren't there, the issue might not have surfaced the way it did. Because the user id is already greater than zero, we never recalculate it for the newly logged-in user, and the old user id is used for all further calls, including the one that checks permissions... Oops.

But what happens when we restart the app pool? The static variable is reset, and things will seem to work as normal again, until the next pair of logins repeats the problem.

So how do we resolve the issue? Don't make it a static variable; keep the user id in state that is scoped to the user instead.
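
A minimal sketch of one alternative, assuming the same DBService as above and classic ASP.NET session state (so the value is scoped per user session rather than shared application-wide):

using System;
using System.Web;

public static class CurrentUser
{
    private const string UserIdKey = "CurrentUserID";

    // Resolve the user id at login and keep it in session state, not in a static field.
    public static void SetUserIdForAuthenticatedUser(Guid userId)
    {
        DBService dbService = new DBService();
        int currentUserId = dbService.GetUserIdForAuthenticatedUser(userId);
        HttpContext.Current.Session[UserIdKey] = currentUserId;
    }

    // Each session sees only its own user id.
    public static int CurrentUserID
    {
        get { return (int?)HttpContext.Current.Session[UserIdKey] ?? 0; }
    }
}

Session state, claims on the authenticated principal, or simply passing the id around as a method parameter all keep the value scoped to the right user; the key point is that nothing user-specific should live in a static member.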

Experience

My friend, technologies will come and go, but nothing can beat the experience you gain from working in the industry.

Thinking in F# [A bubble sort try]

This is my try at implementing a bubble sort in F#.

module BubbleSorterTry

open System.Linq

let Sort (items: int[]) =
    for index in [0 .. items.Length - 1] do
        for innerIndex = index + 1 to items.Length - 1 do
            if items.[index] > items.[innerIndex] then
                let temp = items.[index]
                items.[index] <- items.[innerIndex]
                items.[innerIndex] <- temp
    items

The things I learned:

  1. There is no do-while loop in F#.

  2. If your indentation is not right, you will get confused about what the errors are really about.

  3. By reading through the F# basics, I came to understand that this is not how you think the F# way; I was just trying to mimic C# code in F#. There are more idiomatic constructs, such as Array.iter, head/tail decomposition and pattern matching, for writing this concisely.

But for now I will leave it here, so that when I come back, I will know how much I have progressed 🙂

Web API testing using the OWIN test server

This article demonstrates creating an OWIN test server and hooking it up to a Web API defined in another project.

First of all, my project structure looks like this (see the "Project structure" screenshot).

Here AgencyApiBuilder.WebApi contains my controllers and AgencyApiBuilder.WebApi.Tests contains my tests.

using Microsoft.Owin.Testing;

refers to the Microsoft.Owin.Testing NuGet package, which we install in our test project to start the magic.

The system under test (SUT) is this method from AgencyController.cs

[HttpPost]
public bool Register(AgencyUserModel user)
{
   IdentityUserClaim claim = new IdentityUserClaim {ClaimType = ClaimTypes.Role, ClaimValue = "Agency"};
   user.ApiUser=new ApiUser();
   user.ApiUser.Claims.Add(claim);
   user.ApiUser.UserName = user.UserName;
   user.ApiUser.Email = user.Email;
   claim.UserId = user.ApiUser.Id;
   var registrationStatus = _apiRegistrationService.RegisterApiUser(user);   
   return registrationStatus.Succeeded;
}

This code is all about registering an agency user in the system. Instead of returning a bare object like this, we should return a proper HttpResponseMessage rather than letting Web API wrap it for us; but that's not the intent of this post, so let's move on to the test project.

[TestMethod]
public void RegisterAnAgency_AllFieldsIncluded()
{
    using (var server = TestServer.Create<TestConf>())
    {
        using (var client = new HttpClient(server.Handler))
        {
            AgencyUserModel newUser = new AgencyUserModel();
            newUser.UserName = "newAgencyAdmin";
            newUser.Password = "new@kilmno";
            newUser.Email = "admin@agencykkkk.co.uk";
            newUser.PhoneNumber = "123456789";
            newUser.AgencyName = "New agency";
            newUser.AgencyRegistrationNumber = "ABC12345";

            var response =
                client.PostAsJsonAsync("http://localhost:8081/api/Agency/Register", newUser).Result;
            string jsonContent = response.Content.ReadAsStringAsync().Result;
            bool result = JsonConvert.DeserializeObject<bool>(jsonContent);

            Assert.IsTrue(result);
        }
    }
}

So let’s dissect this function statement by statement.

What is more self-explanatory than this image below?
Test configuration

Here TestConf is the class containing the function which configures the OWIN pipeline.

So why all this pain?

This lets us test our API calls without running the Web API in debug mode or hosting it in IIS.

OWIN's TestServer.Create helps us achieve that.

So the next part explores TestConf, where all the magic happens.

public void Configuration(IAppBuilder app)
{
    WebApiApplication application = new WebApiApplication();
    HttpConfiguration config = new HttpConfiguration();

    config.Routes.MapHttpRoute(
        name: "Default",
        routeTemplate: "api/{controller}/{id}/{action}",
        defaults: new { id = RouteParameter.Optional, action = RouteParameter.Optional });

    var httpDependencyResolver = new WindsorHttpDependencyResolver(application.container);
    config.DependencyResolver = httpDependencyResolver;

    app.UseWebApi(config);
}

Here I am telling the server to resolve all the api/{controller} routes through my dependency injection container (Castle Windsor).

var httpDependencyResolver = new WindsorHttpDependencyResolver(application.container);
config.DependencyResolver = httpDependencyResolver;

The WindsorHttpDependencyResolver class helps us resolve all the controller dependencies and contains the helper methods to do so.

http://stackoverflow.com/questions/11639169/asp-net-mvc-4-rc-with-castle-windsor

I am not able to attribute this to its original author, but the intention is clear: resolve the dependencies, and in our case these dependencies are served by the Windsor container from our WebApiApplication.
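
For reference, a minimal sketch of such a resolver could look like the following (freely adapted; the details in the linked answer may differ):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http.Dependencies;
using Castle.Windsor;

public class WindsorHttpDependencyResolver : IDependencyResolver
{
    private readonly IWindsorContainer _container;

    public WindsorHttpDependencyResolver(IWindsorContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
        // Web API expects null (not an exception) when a type is not registered.
        return _container.Kernel.HasComponent(serviceType)
            ? _container.Resolve(serviceType)
            : null;
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return _container.Kernel.HasComponent(serviceType)
            ? _container.ResolveAll(serviceType).Cast<object>()
            : Enumerable.Empty<object>();
    }

    public IDependencyScope BeginScope()
    {
        // Kept deliberately simple: reuse the same resolver as the scope.
        return this;
    }

    public void Dispose() { }
}

A production implementation would also release the resolved components when the scope is disposed, but this is enough to show the shape of it.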

The actual registrations live in my WebApi project, where I have set up my dependency container and told it how to resolve my dependency graph, like this.

public class DependencyResolver : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(Classes.FromThisAssembly().BasedOn<ApiController>().LifestyleTransient());
        container.Register(Component.For<ApiDbContext>()
            .DependsOn(new { connectionString = GetConnectionString() })
            .Named("dbContext"));
        container.Register(Component.For<UserRegistrationRepository>()
            .DependsOn("dbContext")
            .Named("userRegRepository"));
        container.Register(Component.For<ApiUserRegistrationService>()
            .DependsOn("userRegRepository"));
    }
}

This is distilled from the remarks in Mark Seemann's blog posts about dependency injection in Web API. If you are interested in reading in depth about how Web API projects and DI mix, please refer to this link:
http://blog.ploeh.dk/2012/09/28/DependencyInjectionandLifetimeManagementwithASP.NETWebAPI/

Now we tell the configuration to use this resolver to resolve dependencies as and when they are needed, using the statement:

config.DependencyResolver = httpDependencyResolver;

That is pretty much it; we now have a test server which takes ownership of all the api/{controller}/{id}/{action} routes and starts handling requests.

Now back to our test method….

using (var client = new HttpClient(server.Handler))

This tells the HttpClient to use the test server's message handler, so every message the client sends is routed in memory to the test server instead of hitting an actual server over the network.

Instead of that statement, we can also use server.HttpClient, which gives us an HttpClient already wired up to the OWIN test server.
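
For example, the outer part of the test could equally be written like this (a sketch):

using (var server = TestServer.Create<TestConf>())
using (var client = server.HttpClient)
{
    // same PostAsJsonAsync call and assertions as above
}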

Now we send the actual POST request to the controller via

client.PostAsJsonAsync("http://localhost:8081/api/Agency/Register", newUser)

This posts the newUser object as JSON to Agency/Register, which lives in our Web API project.

So this is our simple setup, which can be used to test actual API calls against our WebApi project. Another technique for testing Web API is to invoke the controller method directly, but I wanted to mimic my actual API calls because I will soon have to make these calls from a mobile endpoint.
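
For completeness, the direct-invocation style would look roughly like this. It is only a sketch: it assumes AgencyController takes its registration service through its constructor, and FakeApiRegistrationService is a hypothetical stub; neither is shown in this post.

// Hypothetical direct call to the action, bypassing routing, serialization and the OWIN pipeline.
var controller = new AgencyController(new FakeApiRegistrationService());
bool succeeded = controller.Register(newUser);
Assert.IsTrue(succeeded);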

This is a code sample from the system I mentioned in the walking skeleton write-up in my previous post.

Walking skeleton for a Web API implementation; continued, part 2

My main project theme revolves around providing an API to my clients, so I decided to validate my technical know-how. I am proficient in C# and .NET, so my options lie in creating a project in Visual Studio. What are my options for creating an API?

  1. Web services (ASMX)
  2. WCF
  3. Web API

In my previous projects I have worked with both ASMX (SOAP based) and WCF. ASMX has been superseded by WCF (with its support for various protocols), and both are mature technologies with plenty of resources for help, whereas Web API is a relatively new and lightweight concept compared to WCF.

In this project my main aim is to provide an external website where clients can log in and add resources. These resources will then be used by external mobile clients.

  1. So my communication protocol will be HTTP.
  2. I need it to work in both web browsers and mobile clients.
  3. As it has to work on mobile clients, I need to guarantee small payloads and fast responses.

Keeping all this in mind, and after reading through all the articles I could find about WCF and Web API, I have decided to develop it using Web API.

My only concern, though, is how to implement security properly in Web API, because I have not built big REST APIs that require authentication. Expanding my walking skeleton to include that might slow down my project, so I have decided to start from the very crux: how to create a Web API project and consume it.

More about how I approached this will come in the next post in the series, and I promise more code samples.

A walking skeleton with Web API: concept introduction

Recently I was planning a project which focuses on creating RESTful services for my new project, and I thought I would share how I approached it. I approached it using Test-Driven Development (TDD), creating a walking skeleton to help me explore the core concepts of the project.

So what is a walking skeleton? It is a lean, end-to-end slice of your end product which validates the system under development.

This is a particularly good way to understand the technical risks you are going to face when developing the system. I think I have seen the benefit of doing this in my current project, because instead of simply creating a project and working my way through, I explored my technical risks up front.

If you are interested in learning more about walking skeletons, here are two resources I found useful.

  1. Pluralsight resource by Mark Seemann
  2. Growing Object-Oriented Software, Guided by Tests (examples are in Java, but the core concepts are worth the read)

I will detail my walking skeleton soon, covering how the Web API project is implemented using TDD.

Passing a connection string directly to Entity Framework context

This is a small tip on passing a connection string directly when initializing an Entity Framework context.

In most of the code examples found online, the database connection string name gets passed to the EF context like this:

using (var context = new DatabaseContext(connectionStringName))

But there is another constructor for initializing a DbContext from an existing connection, which you can build yourself from a raw connection string:

DbContext(DbConnection, Boolean)

This, according to MSDN:

Constructs a new context instance using the existing connection to connect to a database. The connection will not be disposed when the context is disposed if contextOwnsConnection is false
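
A minimal sketch of how this can be used with the DatabaseContext from above (the entity set and connection string here are illustrative, and this assumes an EF 6 code-first context that surfaces the constructor):

using System.Data.Common;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public class Customer
{
    public int Id { get; set; }   // placeholder entity for the sketch
}

public class DatabaseContext : DbContext
{
    // Surface the DbContext(DbConnection, bool) constructor on our own context type.
    public DatabaseContext(DbConnection existingConnection, bool contextOwnsConnection)
        : base(existingConnection, contextOwnsConnection)
    {
    }

    public DbSet<Customer> Customers { get; set; }
}

public static class Example
{
    public static int CountCustomers(string connectionString)
    {
        // Build the connection from a raw connection string and hand it to the context.
        using (var connection = new SqlConnection(connectionString))
        using (var context = new DatabaseContext(connection, contextOwnsConnection: false))
        {
            // Because contextOwnsConnection is false, disposing the context
            // will not dispose the connection; the outer using block does that.
            return context.Customers.Count();
        }
    }
}

Also worth knowing: the DbContext(string nameOrConnectionString) constructor accepts a full connection string as well as a configuration name, if you do not need to manage the connection yourself.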

So next time you are coding, make use of this.

Kudos.

Book review – Adaptive code via C# – Agile coding with design patterns and SOLID principles

As the title says, this book tries to give us an understanding of how to do agile coding in C#.

Gary McLean Hall, the author of this book, presents an introduction to what agile means and covers the basic idea of what adaptive code is all about.

The book is divided into three parts, which focus respectively on agile foundations, SOLID code, and a sample that demonstrates how agile coding is done.

The first chapter is devoted to what agile is and then sets the stage for what adaptive code means by focusing on how dependencies and layering are done properly.

The next chapter is all about the importance of interfaces and how design patterns get involved in the mix. He briefly touches on how to use design patterns to evolve and refactor a design, and shows how testing improves with this refactoring.

My favourite part is the next one, which revolves around the SOLID principles. He first presents the Single Responsibility Principle (SRP) and shows how the decorator, strategy and template patterns help when refactoring towards SRP.

The next chapter is all about the Open/Closed Principle, and the way he explains the hard rule of no modification, and how bug fixes fit into it, is worth mentioning. He concludes the chapter by discussing how much abstraction is too much.

The next chapter, on the Liskov Substitution Principle (LSP), is elaborated perfectly; he explains the rules that can be used to verify whether LSP is violated. He combines this with the idea of code contracts in .NET, and I feel this is the best thought-out chapter.

The importance of interface segregation is covered in the next chapter, where he describes how interfaces can lose cohesion if they are not well defined. This chapter serves its purpose but has some loose ends, such as the section on single-method interfaces, which glosses over the theory and sticks out like a sore thumb.

In the Dependency Injection chapter, he explains the importance of separating dependencies and how this can be achieved using inversion of control containers. The author then covers the Register, Resolve, Release pattern in IoC containers and gives examples of composing object graphs in MVC and WPF composition roots. He summarises the chapter by examining the pros and cons of convention-based versus configuration-based dependency resolution.

He then goes on to build a sample application the agile way. The conversations between team members detail the agile process and give us an overview of how things are done in the real world. This chapter brilliantly brings together what the author covered theoretically in all the previous chapters.

Overall I found this book easy to follow, and although it covers two separate topics, the software development process and how to write good code, the design and implementation scenarios in the last chapters combine the two beautifully and capture the author's intention in writing the book.

My two cents: if you are a developer and sometimes wonder why things are done the way they are, whether in structuring your project or in SOLID design, I think this book clears up those grey areas. After reading it, you will feel much more confident about the purpose of DESIGN.

You can buy this book from Amazon at http://www.amazon.co.uk/Adaptive-Code-via-Interface-Principles/dp/0735683204/

Basics of refactoring code

I will just start off with a bit of code, and then we will try to reason about changing it to meet customer demand.

Taken from an MVC application.

This is my UI layer: a controller which calls a service method to retrieve a file for download.

Controller action


public ActionResult GetCSVFile()
{
    var csvFile = _service.GetFile(datatable, payrollExportType);
    return csvFile;
}

Business logic assembly — service layer

I have hidden away the file generation code in the service layer to focus on the problem.


public File GetFile(DataTable hugeTable, PayrollExportType exportType)
{
    switch (exportType)
    {
        case PayrollExportType.Sage:
            return SageCSVFormatter();
        case PayrollExportType.QuickBooks:
            return QuickBooksCSVFormatter();
        default:
            throw new NotSupportedException("Unknown payroll export type.");
    }
}

Do you find anything wrong with the code?

Perhaps, in isolation, there is nothing wrong with this code.

What if we want this code to output one more CSV, for a different payroll export type?

Can we introduce a new downloadable file format without affecting the existing functionality, thereby reducing the impact of the code change?

Without any refactoring:

  1. We have to add one more value to the export type enum.
  2. We extend our switch statement with one more case and provide a function to output the necessary CSV file.

Is this kind of extension to the code ideal?

But wait, why take all this pain when we can make a simple two-line change (as mentioned above) and forget about it? Consider this as part of a large system where many modules depend on this service method. Any change to the service method shakes up its clients, knowingly or unknowingly, and so the client functionality also needs to be retested to make sure everything still works.

Our goal is to minimize these kinds of ripple effects.

To do that, we need to make sure the service layer function does not directly depend on CSV file generation, and handle that work somewhere else. In effect, whatever CSV file is wanted, the service layer is oblivious to how it is generated; it just gives you back the file.

What is the use? If somebody else has to do the job anyway, why not let it be the service layer function itself?

Can it be? Yes, it can, but then you have to make sure all the dependent code still works by testing it (and remember, it can span several modules across various boundaries).

If the service method does not know the details of how the CSV is generated, then some other appropriate component has to take care of the CSV file generation, and the service method just acts as an orchestrator/collaborator.

With this technique, the service method no longer has to worry about new CSV file formats; it just hands back the CSV file the client needs, so we never touch that part of the code. But wait, to do that, am I not modifying some other parts of the code? Not really; if you design the system properly, you will just be appending to the code, not modifying it. Design how? The answer is: by abstraction. If we abstract the CSV file generation away into some other part of the code, then our service method need not be aware of how to generate a CSV file and can just sit there peacefully.

Mm, let me think of a real life [made up] story.

Let us say a person goes to the toy shop and buys a toy dog. If that toy needs just a single key press to start barking, hurray, we are good. Now let's say he goes and buys another toy dog, but this time the toy has three levers and a switch to make it bark. The person has to know exactly which keys to turn, and in which order, to make it bark, doesn't he?

What if every toy from the shop had a single button which makes the dog bark? Then the person doesn't have to worry about studying how the hell the toy barks. Just press the button and hurray, the toy barks. And for any toy, the person now knows he just needs to press the button and it will bark.

Hey, don't digress... you said something about appending to the code.

Appending? Aah, there you go, my friend: by inheritance.

OK, let's look at the diagram first.

Inheritance and SOLID

Here I have an abstraction named ICSVGenerator, and all my appending is done by extending this abstraction via inheritance.

Our service method will now depend on this ICSVGenerator and will be oblivious to which specific CSV format the client needs. With this technique, new CSV formats can be added without modifying the existing orchestration code.
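
A minimal sketch of what this could look like (the names other than ICSVGenerator are illustrative, and I am using byte[] for the generated CSV content instead of the File type from the earlier snippet, just to keep the sketch self-contained):

using System;
using System.Data;

// The abstraction the service depends on; each payroll format gets its own implementation.
public interface ICSVGenerator
{
    byte[] Generate(DataTable data);
}

public class SageCsvGenerator : ICSVGenerator
{
    public byte[] Generate(DataTable data)
    {
        // Sage-specific CSV formatting would go here.
        throw new NotImplementedException();
    }
}

public class QuickBooksCsvGenerator : ICSVGenerator
{
    public byte[] Generate(DataTable data)
    {
        // QuickBooks-specific CSV formatting would go here.
        throw new NotImplementedException();
    }
}

// The service no longer switches on the export type; it only orchestrates.
public class PayrollExportService
{
    private readonly ICSVGenerator _csvGenerator;

    // The concrete generator is supplied from outside, for example by a DI container.
    public PayrollExportService(ICSVGenerator csvGenerator)
    {
        _csvGenerator = csvGenerator;
    }

    public byte[] GetFile(DataTable hugeTable)
    {
        return _csvGenerator.Generate(hugeTable);
    }
}

Adding a new payroll format now means writing a new ICSVGenerator implementation and registering it, rather than editing a switch statement; that is the Open/Closed Principle and the strategy pattern in action.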

Hey, wait, you promised no code change. Well, yes and no. If my service method now depends on this interface, all the clients that depend on my service method may have to change as well. So how do you reduce this? Again the same principle: abstract away the dependency.

We can only extend up to a limit, because we are not soothsayers who can predict what is going to happen next. This is not a one-time process; refactoring needs to be done as the code grows.

I have purposely not yet introduced the term SOLID design, because without knowing the purpose there is no point in introducing the concept. So why was SOLID introduced? It is a set of design guidelines to guard us against the possible ill effects that can occur while we design a system.

I have also not explicitly named dependency inversion (the service now depends on the ICSVGenerator abstraction rather than on concrete generators) or dependency injection (the appropriate ICSVGenerator implementation is injected at run time by a dependency injection framework).

This sums up various concepts, including:

  1. Abstraction, inheritance and polymorphism
  2. SOLID principles
  3. The Open/Closed Principle in particular
  4. The strategy pattern