Thursday, January 30, 2020

ASP.NET page too slow to load? Common causes of slow pages

Tips For Web Applications

  1. Turn off Tracing unless required
    Tracing is one of the wonderful features that enables us to track the application's trace and sequences. However, it is useful only for developers, so set this to "false" unless you need to monitor the trace logging.
    How it affects performance
    Enabling tracing adds performance overhead and might expose private information, so it should be enabled only while an application is being actively analyzed.
    Solution
    When not needed, tracing can be turned off using:

    <trace enabled="false" requestLimit="10" pageOutput="false" traceMode="SortByTime" localOnly="true" />
  2. Turn off Session State, if not required
    One extremely powerful feature of ASP.NET is its ability to store session state for users, such as a shopping cart on an e-commerce site or other per-user data.
    How it affects performance
    Since ASP.NET manages session state by default, you pay the cost in memory even if you don't use it. Whether you store your data in-process, on a state server, or in a SQL database, session state consumes memory, and storing or retrieving data from it takes time.
    Solution
    You may not require session state when your pages are static or when you do not need to store information captured in the page.
    In such cases, where you need not use session state, disable it on your web form using the directive:
       <%@ Page EnableSessionState="false" %>
    If you use session state only to retrieve data and never to update it, make it read-only by using the directive:
       <%@ Page EnableSessionState="ReadOnly" %>
  3. Disable View State of a Page if possible
    View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls.
    View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
    How it affects performance
    There are, however, a number of drawbacks to the use of view state:
    1. It increases the total payload of the page, both when served and when posted back. There is also additional overhead incurred when serializing or deserializing the view state data that is posted back to the server.
    2. View state increases the memory allocations on the server. Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed.

    Solution

    Pages that do not have any server postback events can have the view state turned off.
    The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, simply set the EnableViewState property to false, or set it globally within the page using this setting:
      <%@ Page EnableViewState="false" %>
    If you turn view state off for a page or control, make sure you thoroughly test your pages to verify that they continue to function correctly.
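    As a sketch, view state can be switched off per control as well as per page; the DataGrid ID below is a hypothetical example:

    ```aspx
    <%-- Page level: no control on this page keeps view state --%>
    <%@ Page EnableViewState="false" %>

    <%-- Control level: only this grid gives up its view state --%>
    <asp:DataGrid ID="dgProducts" runat="server" EnableViewState="false" />
    ```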
  4. Set debug="false" in web.config
    When you create the application, this attribute is set to "true" by default, which is very useful while developing. However, when you deploy your application, always set it to "false".
    How it affects performance
    Setting it to "true" requires PDB information to be inserted into the compiled files, which results in comparatively larger files and slower processing.
    Solution
    Therefore, always set debug="false" before deployment.
  5. Avoid Response.Redirect
    The Response.Redirect() method simply tells the browser to visit another page.
    How it affects performance
    Redirects are also very chatty: each one costs an extra round trip. They should only be used when you are transferring people to another physical web server.
    Solution
    For any transfer within your own server, use Server.Transfer. You will save a lot of needless HTTP requests. Instead of telling the browser to redirect, it simply changes the "focus" on the web server and transfers the request. This means you don't get as many HTTP requests coming through, which eases the pressure on your web server and makes your applications run faster.
    Tradeoffs
    1. Server.Transfer can only work between pages on the same server; only Response.Redirect can send the user to another site.
    2. Server.Transfer maintains the original URL in the browser. This can really help streamline data entry techniques, although it may make for confusion when debugging.
    3. To reduce the CLR exception count, use Response.Redirect(".aspx", false) instead of Response.Redirect(".aspx").
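    A minimal sketch of the two approaches, with hypothetical page names:

    ```csharp
    protected void ContinueButton_Click(object sender, EventArgs e)
    {
        // Same application: the request is handed over on the server,
        // with no extra browser round trip; the URL shown stays the same.
        Server.Transfer("Checkout.aspx");
    }

    protected void ExternalLink_Click(object sender, EventArgs e)
    {
        // Different site: a redirect is the only option. Passing false as
        // the second argument avoids the internal ThreadAbortException
        // that Response.Redirect(url) otherwise raises.
        Response.Redirect("http://example.com/partner.aspx", false);
    }
    ```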
  1. Use StringBuilder to concatenate strings
    How it affects performance
    Strings are evil when you want to append and concatenate text repeatedly. Every operation you perform on a string is stored in memory as a separate object, so heavy concatenation must be avoided as much as possible.
    That is, when a string is modified, the runtime creates a new string and returns it, leaving the original to be garbage collected. Most of the time, this is a fast and simple way to do it, but when a string is modified repeatedly, it becomes a burden on performance: all of those allocations eventually get expensive.
    Solution
    Use StringBuilder whenever string concatenation is needed; it appends into a single internal buffer instead of creating a new string object for every operation.
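    For example (a plain C# sketch), building a comma-separated list with StringBuilder performs one append per item instead of allocating a new string each time:

    ```csharp
    using System.Text;

    // Each Append writes into StringBuilder's internal buffer;
    // the equivalent loop with string += would allocate 1000 strings.
    var sb = new StringBuilder();
    for (int i = 0; i < 1000; i++)
    {
        sb.Append(i).Append(',');
    }
    string csv = sb.ToString();
    ```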
  2. Avoid throwing exceptions
    How it affects performance
    Exceptions are probably one of the heaviest resource hogs and causes of slowdown you will ever see in web applications, as well as in Windows applications.
    Solution
    You can use as many try/catch blocks as you want; it is throwing exceptions gratuitously that loses performance. For example, you should stay away from using exceptions for control flow.
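    As an illustration, parsing untrusted input with TryParse keeps the failure case out of the exception machinery (the query-string parameter name is made up):

    ```csharp
    // int.TryParse reports failure through its return value, so a bad
    // value costs a branch rather than a thrown FormatException.
    string raw = Request.QueryString["page"]; // hypothetical parameter

    int pageIndex;
    if (!int.TryParse(raw, out pageIndex))
    {
        pageIndex = 0; // fall back to the first page instead of throwing
    }
    ```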
  3. Use the finally block to release resources
    1. The finally block gets executed regardless of the outcome of the try block.
    2. Always use the finally block to release resources such as database connections and open files, so that cleanup runs whether the code in the try block succeeded or control went to the catch block.
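    A sketch of both forms, assuming a connectionString variable is in scope:

    ```csharp
    SqlConnection conn = new SqlConnection(connectionString);
    try
    {
        conn.Open();
        // ... execute commands ...
    }
    finally
    {
        conn.Close(); // runs whether the try block succeeded or threw
    }

    // Equivalent and more idiomatic: a using block compiles down to
    // the same try/finally and disposes the connection for you.
    using (SqlConnection conn2 = new SqlConnection(connectionString))
    {
        conn2.Open();
        // ... execute commands ...
    }
    ```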
  4. Use Client-Side Scripts for validations
    User input is evil and must be thoroughly validated before processing, to avoid overhead and possible injection attacks on your application.
    How It improves performance
    Client-side validation can help reduce the round trips required to process a user's request. In ASP.NET, you can use validation controls that emit client-side checks. However, always validate on the server side too, to cover the infamous JavaScript-disabled scenarios.
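    For instance, a RequiredFieldValidator performs the check in the browser first, while a server-side Page.IsValid check covers the JavaScript-disabled case (control IDs are hypothetical):

    ```aspx
    <asp:TextBox ID="txtEmail" runat="server" />
    <asp:RequiredFieldValidator ID="rfvEmail" runat="server"
        ControlToValidate="txtEmail"
        ErrorMessage="Email is required" />
    ```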
  5. Avoid unnecessary round trips to the server
    How it affects performance
    Round trips significantly affect performance. They are subject to network latency and to downstream server latency. Many data-driven Web sites heavily access the database for every user request. While connection pooling helps, the increased network traffic and processing load on the database server can adversely affect performance.
    Solution
    1. Keep round trips to an absolute minimum
    2. Implement Ajax UI whenever possible. The idea is to avoid full page refresh and only update the portion of the page that needs to be changed
  6. Use Page.IsPostBack
    Make sure you don't execute code needlessly. Use the Page.IsPostBack property to ensure that you only perform page initialization logic when a page is loaded for the first time and not in response to client postbacks.
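    A typical sketch, with a hypothetical grid and data-loading helper:

    ```csharp
    protected void Page_Load(object sender, EventArgs e)
    {
        // The expensive load and bind run only on the first request;
        // on postbacks the grid repopulates itself from view state.
        if (!Page.IsPostBack)
        {
            OrdersGrid.DataSource = LoadOrders(); // hypothetical helper
            OrdersGrid.DataBind();
        }
    }
    ```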
  7. Include Return Statements within the Function/Method
    How it improves performance
    Explicitly using return allows the JIT to perform slightly more optimizations. Without a return statement, each function/method is given several local variables on stack to transparently support returning values without the keyword. Keeping these around makes it harder for the JIT to optimize, and can impact the performance of your code. Look through your functions/methods and insert return as needed. It doesn't change the semantics of the code at all, and it can help you get more speed from your application.
  8. Use a Foreach loop instead of a For loop for string iteration
    Foreach is far more readable, and in the future it will become as fast as a For loop for special cases like strings. Unless string manipulation is a real performance hog for you, the slightly messier For-loop code may not be worth it.
  9. Avoid Unnecessary Indirection
    How it affects performance
    When you pass arguments ByRef, you pass pointers instead of the actual values.
    Many times this makes sense (side-effecting functions, for example), but you don't always need it. Passing pointers results in more indirection, which is slower than accessing a value that is on the stack.
    Solution
    When you don't need to go through the heap, it is best to avoid the extra indirection.
  10. Use "ArrayLists" in place of arrays
    How it improves performance
    An ArrayList has everything that is good about an array plus automatic sizing and the Add, Insert, Remove, Sort, and BinarySearch methods. All these great helper methods come from implementing the IList interface.
    Tradeoffs
    The downside of an ArrayList is the need to cast objects upon retrieval.
  11. Always check Page.IsValid when using Validator Controls
    Make sure you check Page.IsValid before processing your forms when using validator controls.
  12. Use Paging
    Take advantage of paging's simplicity in .NET. Only show small subsets of data at a time, allowing the page to load faster.
    Tradeoffs
    Just be careful when you mix in caching. Don't cache all the data in the grid.
  13. Store your content by using caching
    How it improves performance
    ASP.NET allows you to cache entire pages, fragments of pages, or controls. You can also cache variable data by specifying the parameters on which the data depends. By using caching, you help the ASP.NET engine return data for repeated requests for the same page much faster.
    When and Why Use Caching
    Proper use and fine-tuning of your caching approach will result in better performance and scalability of your site. However, improper use of caching will actually slow your site down and consume a lot of server cycles and memory.
    Good candidates for caching are pages or fragments whose data changes infrequently, and static content.
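    As a sketch, page-level output caching is a single directive; the "category" parameter here is hypothetical:

    ```aspx
    <%-- Cache the rendered page for 60 seconds, keeping one cached
         copy per distinct value of the "category" query parameter. --%>
    <%@ OutputCache Duration="60" VaryByParam="category" %>
    ```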
  14. Use low-cost authentication
    Authentication can also have an impact on the performance of your application. For example, Passport authentication is slower than forms-based authentication, which in turn is slower than Windows authentication.
  15. Minimize the number of web server controls
    How it affects performance
    The use of web server controls increases the response time of your application because they need time to be processed on the server side before they are rendered on the client side.
    Solution
    One way to minimize the number of web server controls is to use plain HTML elements where they are suited, for example when you only want to display static text.
  16. Avoid using unmanaged code
    How it affects performance
    Calls to unmanaged code are a costly marshaling operation.
    Solution
    Try to reduce the number of calls between managed and unmanaged code. Consider doing more work in each call rather than making frequent calls to do small tasks.
  17. Avoid making frequent calls across processes
    If you are working with distributed applications, cross-process calls involve additional overhead negotiating network and application-level protocols. In this case, network speed can also be a bottleneck. Try to do as much work as possible in fewer calls over the network.
  18. Cleaning Up Style Sheets and Script Files
    1. A quick and easy way to improve your web application's performance is by going back and cleaning up your CSS Style Sheets and Script Files of unnecessary code or old styles and functions. It is common for old styles and functions to still exist in your style sheets and script files during development cycles and when improvements are made to a website. 
    2. Many websites use a single CSS Style Sheet or Script File for the entire website. Sometimes, just going through these files and cleaning them up can improve the performance of your site by reducing the page size. If you are referencing images in your style sheet that are no longer used on your website, it's a waste of performance to leave them in there and have them loaded each time the style sheet is loaded. 
    3. Run a web page analyzer against pages in your website so that you can see exactly what is being loaded and what takes the most time to load.
  19. Design with ValueTypes
    Use simple structs when you can, and when you don't do a lot of boxing and unboxing.
    Tradeoffs
    ValueTypes are far less flexible than Objects, and end up hurting performance if used incorrectly. You need to be very careful about when you treat them like objects. This adds extra boxing and unboxing overhead to your program, and can end up costing you more than it would if you had stuck with objects.
  20. Minimize assemblies
    Minimize the number of assemblies you use to keep your working set small. If you load an entire assembly just to use one method, you're paying a tremendous cost for very little benefit. See if you can duplicate that method's functionality using code that you already have loaded.
  21. Encode Using ASCII When You Don't Need UTF
    By default, ASP.NET comes configured to encode requests and responses as UTF-8.
    If ASCII is all your application needs, eliminating the UTF overhead can give you back a few cycles. Note that this can only be done on a per-application basis.
  22. Avoid Recursive Functions / Nested Loops
    These are general things to watch for in any programming language, as they can consume a lot of memory. Always avoid deep nested loops and recursive functions where a simpler iteration will do, to improve performance.
  23. Minimize the Use of Format()
    When you can, use ToString() instead of Format(). In most cases, it will provide you with the functionality you need with much less overhead.
  24. Place StyleSheets into the Header
    Web developers who care about performance want the browser to load whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users with slow Internet connections. When the browser loads the page progressively, the header, the logo, and the navigation components serve as visual feedback for the user.
    When we place style sheets near the bottom of the HTML, most browsers stop rendering to avoid redrawing elements of the page if their styles change, decreasing the perceived performance of the page. So, always place style sheets into the header.
  25. Put Scripts at the End of the Document
    Unlike style sheets, it is better to place scripts at the end of the document. Progressive rendering is blocked until all style sheets have been downloaded, and scripts cause progressive rendering to stop for all content below the script until they are fully loaded. Moreover, while downloading a script, the browser does not start any other component downloads, even on different hostnames.
    So, always have scripts at the end of the document.
  26. Make JavaScript and CSS External
    Using external files generally produces faster pages because the JavaScript and CSS files are cached by the browser. Inline JavaScript and CSS increase the HTML document size but reduce the number of HTTP requests. With cached external files, the size of the HTML is kept small without increasing the number of HTTP requests, thus improving performance.

Tips For Database Operations

  1. Return Multiple Resultsets
    If the database code has request paths that go to the database more than once, then these round-trips decrease the number of requests per second your application can serve.
    Solution
    Return multiple resultsets in a single database request, so that you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
  2. Connection Pooling and Object Pooling
    Connection pooling is a useful way to reuse connections for multiple requests, rather than paying the overhead of opening and closing a connection for each request. It's done implicitly, but you get one pool per unique connection string. Make sure you call Close or Dispose on a connection as soon as possible. When pooling is enabled, calling Close or Dispose returns the connection to the pool instead of closing the underlying database connection.
    Account for the following issues when pooling is a part of your design:
    1. Share connections 
    2. Avoid per-user logons to the database 
    3. Do not vary connection strings 
    4. Do not cache connections
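    A sketch of the open-late/close-early pattern; the table name and connectionString are assumptions:

    ```csharp
    // Disposing the connection returns it to the pool rather than
    // closing the underlying database connection. Keeping the
    // connection string identical for all callers yields one shared pool.
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
    {
        conn.Open();
        int orderCount = (int)cmd.ExecuteScalar();
    }
    ```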
  3. Use SqlDataReader Instead of Dataset wherever it is possible
    If you are reading a table sequentially, use a DataReader rather than a DataSet. The DataReader object creates a read-only stream of data, which increases your application's performance because only one row is in memory at a time.
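    A minimal sketch of the streaming pattern (the table, column, and connectionString are hypothetical):

    ```csharp
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("SELECT Name FROM Products", conn))
    {
        conn.Open();
        // The reader streams rows forward-only; only the current row
        // is held in memory, unlike a DataSet which buffers everything.
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                string name = reader.GetString(0);
                // ... process the row ...
            }
        }
    }
    ```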
  4. Keep Your Datasets Lean
    Remember that a DataSet stores all of its data in memory, and that the more data you request, the longer it will take to transmit across the wire.
    Therefore, only put the records you need into the dataset.
  5. Avoid Inefficient queries
    How it affects performance
    Queries that process and then return more columns or rows than necessary waste processing cycles that could best be used for servicing other requests.
    Cause of Inefficient queries
    1. Too much data in your results is usually the result of inefficient queries.
    2. The SELECT * query often causes this problem. You do not usually need to return all the columns in a row. Also, analyze the WHERE clause in your queries to ensure that you are not returning too many rows. Try to make the WHERE clause as specific as possible to ensure that the least number of rows are returned. 
    3. Queries that do not take advantage of indexes may also cause poor performance.
  6. Unnecessary round trips
    How it affects performance
    Round trips significantly affect performance. They are subject to network latency and to downstream server latency. Many data-driven Web sites heavily access the database for every user request. While connection pooling helps, the increased network traffic and processing load on the database server can adversely affect performance.
    Solution
    Keep round trips to an absolute minimum.
  7. Too many open connections
    Connections are an expensive and scarce resource and should be shared between callers by using connection pooling. Opening a connection for each caller limits scalability.
    Solution
    To ensure the efficient use of connection pooling, avoid keeping connections open and avoid varying connection strings.
  8. Avoid Transaction misuse
    How it affects performance
    If you select the wrong type of transaction management, you may add latency to each operation. Additionally, if you keep transactions active for long periods of time, the active transactions may cause resource pressure.
    Solution
    Transactions are necessary to ensure the integrity of your data, but you need to ensure that you use the appropriate type of transaction for the shortest duration possible and only where necessary.
  9. Avoid over-normalized tables
    Over-normalized tables may require excessive joins for simple operations. These additional steps may significantly affect the performance and scalability of your application, especially as the number of users and requests increases.
  10. Reduce SerializationDataset serialization is more efficiently implemented in .NET Framework version 1.1 than in version 1.0. However, Dataset serialization often introduces performance bottlenecks.
    You can reduce the performance impact in a number of ways:
    1. Use column name aliasing 
    2. Avoid serializing multiple versions of the same data 
    3. Reduce the number of DataTable objects that are serialized
  11. Do Not Use CommandBuilder at Run Time
    How it affects performance
    CommandBuilder objects such as SqlCommandBuilder and OleDbCommandBuilder are useful when you are designing and prototyping your application. However, you should not use them in production applications. The processing required to generate the commands affects performance.
    Solution
    Manually create stored procedures for your commands, or use the Visual Studio® .NET design-time wizard and customize them later if necessary.
  12. Use Stored Procedures Whenever Possible
    1. Stored procedures are highly optimized tools that result in excellent performance when used effectively.
    2. Set up stored procedures to handle inserts, updates, and deletes with the data adapter
    3. Stored procedures do not have to be interpreted, compiled or even transmitted from the client, and cut down on both network traffic and server overhead.
    4. Be sure to use CommandType.StoredProcedure instead of CommandType.Text
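    A sketch of the call pattern; the procedure and parameter names are made up:

    ```csharp
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("usp_GetOrdersByCustomer", conn))
    {
        // StoredProcedure lets the provider invoke the procedure by name
        // instead of sending a text batch to be parsed.
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@CustomerId", customerId);

        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // ... process each order row ...
            }
        }
    }
    ```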
  13. Avoid Auto-Generated Commands
    When using a data adapter, avoid auto-generated commands. These require additional trips to the server to retrieve metadata and give you a lower level of control over the interaction. While using auto-generated commands is convenient, it's worth the effort to write them yourself in performance-critical applications.
  14. Use Sequential Access as Often as Possible
    With a data reader, use CommandBehavior.SequentialAccess. This is essential for dealing with BLOB data types, since it allows data to be read off the wire in small chunks. While you can only work with one piece of the data at a time, the latency for loading a large data type disappears. If you don't need to work with the whole object at once, using sequential access will give you much better performance.
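    A sketch of chunked BLOB reading; the command (cmd), column position, and buffer size are assumptions:

    ```csharp
    // SequentialAccess streams column data off the wire as you ask for
    // it; columns must then be read strictly in order.
    using (SqlDataReader reader =
           cmd.ExecuteReader(CommandBehavior.SequentialAccess))
    {
        byte[] buffer = new byte[8192];
        while (reader.Read())
        {
            long offset = 0;
            long bytesRead;
            // GetBytes returns 0 when the BLOB in column 0 is exhausted.
            while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
            {
                // ... write this chunk to an output stream ...
                offset += bytesRead;
            }
        }
    }
    ```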

Tips for ASP.NET Applications Developed using VB

  1. Enable Option Strict and Option Explicit for your pages
    With Option Strict on, you protect yourself from inadvertent late binding and enforce a higher level of coding discipline.
  2. Use early binding in Visual Basic or JScript code
    Visual Basic 6 does a lot of work under the hood to support casting of objects, and many programmers aren't even aware of it. In Visual Basic 7, this is an area out of which you can squeeze a lot of performance.
    Solution
    When you compile, use early binding. This tells the compiler that type coercion is performed only when explicitly requested.
    This has two major effects:
    1. Strange errors become easier to track down.
    2. Unneeded coercions are eliminated, leading to substantial performance improvements.
    With late binding, when you use an object as if it were of a different type, Visual Basic coerces the object for you. This is handy for the programmer, but those implicit coercions are exactly what cost performance.
  3. Put Concatenations in One Expression
    If you have multiple concatenations on multiple lines, try to stick them all on one expression. The compiler can optimize by modifying the string in place, providing a speed and memory boost. If the statements are split into multiple lines, the Visual Basic compiler will not generate the Microsoft Intermediate Language (MSIL) to allow in-place concatenation.
Some additional information that might also help

1. Disable the Debug Mode or Set Debug ="false"
How it affects performance: By default this attribute is "true" when you create a new application, which is useful while you are developing it. debug="true" means that PDB information is inserted into the compiled files; this results in larger files and is a performance issue. Before deployment you should set the following tag:

<compilation defaultLanguage="Vb" debug="false">

2. Set trace enabled="false"

How it affects performance: With the help of tracing, we can track the application's execution path and sequences. Enabling tracing adds performance overhead and might expose private information, so it should be enabled only while an application is being actively analyzed. You can turn off tracing using:
 <trace enabled="false" ... />

3. While developing using Visual Studio.NET

When you set the Configuration option to "Debug" mode, a PDB file is created to store the debug information; hence, before deploying the application, set it to "Release" mode.

You can set using

Select Menu Build -> Configuration Manager -> set the configuration option of the project to "Release" mode.

4. Disable the viewstate:

With the automatic state management feature, server controls re-populate their values without you writing any code, but this affects performance. So always set EnableViewState="false" when it is not required.
For a control, set its EnableViewState property to false.
For a page: <%@ Page EnableViewState="false" %>
5. Use Caching to improve the performance of your application.
OutputCaching enables your page to be cached for a specific duration and to be invalidated based on various parameters that you can specify. The cache exists for the duration you specify, and until that time the requests do not go to the server and are served from the cache.

Do not assign cached items a short expiration. Items that expire quickly cause unnecessary turnover in the cache and frequently cause more work for cleanup code and the garbage collector. In case you have static as well as dynamic sections of your page, try to use Partial Caching (Fragment Caching) by breaking up your page into user controls and specify Caching for only those Controls which are more-or-less static.
6. Use appropriate Authentication Mechanism.

Following are the Authentication Modes.

  • None
  • Windows
  • Forms
  • Passport
7. Validate all input received from the users.
Validate all input received from the users at the client side to avoid server round trips.


8. Use Finally Method to kill resources.

Always use the finally block to kill resources like closing database connection, closing files etc.

9. Always use StringBuilder to concatenate strings

The in-memory representation of a string is an array of characters, so on re-assignment a new array of Char is formed and the reference changes, leaving the old string in memory for the garbage collector to dispose of. This slows the application down. Always use StringBuilder for concatenating strings.

10. Enable the web gardening for multiprocessors computers:

The ASP.NET process model helps enable scalability on multiprocessor machines by distributing the work to several processes, one for each CPU, each with processor affinity set to its CPU. This technique is called Web gardening, and it can dramatically improve the performance of some applications.

11. Set Enlist="false" in connection string:

"True" indicates that the SQL Server connection pooler automatically enlists the connection in the creation thread's current transaction context. If you do not need that behavior, set Enlist=false in the connection string to avoid the overhead.

12. Avoid recursive functions / nested loops

13. Always set option strict to "on"

14. Try to avoid Throwing Exceptions.

The notes above were found on c-sharpcorner.

Friday, December 27, 2019

Fixing database Is In Use Error While Restoring Database from Backup

I encountered an error while restoring a SQL database: "Error While Restoring Database from Backup" (the database is in use).

So I did the following to resolve the issue, and it worked.

use master
Create Database *****_OLD
alter database *****_OLD set offline with rollback immediate;

After a successful restore, execute the lines below to make the database available again:

use master
alter database *****_OLD set online with rollback immediate;


In case the above steps don't work, change the database name from *****_OLD to the database name in your .bak file; then it should work.

Thursday, November 7, 2019

Load Balancer

Load balancers

How do load balancers distribute the web traffic? There are several algorithms:
  • Round-robin: each request is assigned to the next server in the list, one server after the other. This is also called the poor man’s load balancer as this is not true load balancing. Web traffic is not distributed according to the actual load of each server.
  • Weight-based: each server is given a weight and requests are assigned to the servers according to their weight. Can be an option if your web servers are not of equal quality and you want to direct more traffic to the stronger ones.
  • Random: the server to handle the request is randomly selected
  • Sticky sessions: the load balancer keeps track of the sessions and ensures that return visits within the session always return to the same server
  • Least current request: route traffic to the server that currently has the least amount of requests
  • Response time: route traffic to the web server with the shortest response time
  • User or URL information: some load balancers offer the ability to distribute traffic based on the URL or the user information. Users from one geographic location region may be sent to the server in that location. Requests can be routed based on the URL, the query string, cookies etc.
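As a rough illustration of the first algorithm above, a round-robin selector is just a wrapping counter over the server list (server names here are invented; a real load balancer would also track health and load):

```csharp
public class RoundRobinBalancer
{
    private readonly string[] servers;
    private int next = -1;

    public RoundRobinBalancer(string[] servers)
    {
        this.servers = servers;
    }

    // Each call returns the next server in the list, wrapping around.
    // Interlocked.Increment keeps the counter safe under concurrency.
    public string NextServer()
    {
        int i = System.Threading.Interlocked.Increment(ref next);
        return servers[i % servers.Length];
    }
}

// var lb = new RoundRobinBalancer(new[] { "web1", "web2", "web3" });
// NextServer() then hands out "web1", "web2", "web3", "web1", ...
```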
Apart from algorithms we can group load balancers according to the technology they use:
  • Reverse Proxy: a reverse proxy takes an incoming request and makes another request on behalf of the user. We say that the Reverse Proxy server is a middle-man, or a man-in-the-middle, between the web server and the client. The load balancer maintains two separate TCP connections: one with the user and one with the web server. This option requires only minimal changes to your network architecture. The load balancer has full access to all the traffic on the way through, allowing it to check for any attacks and to manipulate the URL or header information. The downside is that, as the reverse proxy server maintains the connection with the client, you may need to set a long time-out to prepare for long sessions, e.g. in case of a large file download. This opens the possibility for DoS attacks. Also, the web servers will see the load balancer server as the client. Thus any logic that is based on headers like REMOTE_ADDR or REMOTE_HOST will see the IP of the proxy server rather than the original client. There are software solutions out there that rewrite the server variables and fool the web servers into thinking that they had a direct line with the client.
  • Transparent Reverse Proxy: similar to Reverse Proxy except that the TCP connection between the load balancer and the web server is set with the client IP as the source IP so the web server will think that the request came directly from the client. In this scenario the web servers must use the load balancer as their default gateway.
  • Direct Server Return (DSR): this solution runs under different names such as nPath routing, 1 arm LB, Direct Routing, or SwitchBack. This method forwards the web request by setting the web server’s MAC address. The result is that the web server responds directly back to the client. This method is very fast which is also its main advantage. As the web response doesn’t go through the load balancer, even less capable load balancing solutions can handle a relatively large amount of web requests. However, this solution doesn’t offer some of the great options of other load balancers, such as SSL offloading – more on that later
  • NAT load balancing: NAT, which stands for Network Address Translation, works by changing the destination IP address of the packets
  • Microsoft Network Load Balancing: NLB manipulates the MAC address of the network adapters. The servers talk among themselves to decide which one of them will respond to the request. The next blog post is dedicated to NLB.
Let’s pick three types of load balancers and compare the features available to them:
  • Physical load balancers that sit in front of the web farm, also called hardware load balancers
  • ARR: Application Request Routing, an extension to IIS that can be placed in front of the web tier or directly on the web tier
  • NLB: Network Load Balancing, which is built into Windows Server and performs some basic load-balancing behaviour
Load balancer feature comparison
No additional failure points:

This point indicates whether the load-balancing solution introduces any additional failure points into the overall network.

Physical machines are placed in front of your web farm and they can, of course, fail. You can deploy multiple devices to minimise the possibility of a failure, but the potential failure point remains.
With ARR you can put the load balancer in front of your web farm on a separate machine, on a web farm of load balancers, or on the same web tier as the web servers. If it's on a separate tier then it has some additional load-balancing features. Putting it on the same tier adds complexity to the configuration but eliminates additional failure points.
NLB runs on the web server itself, so there are no additional failure points.

Health checks
This feature indicates whether the load balancer can check that a web server is healthy. This usually means instructing the load balancer to periodically send a request to the web servers and expect some type of response: either a full HTML page or just an HTTP 200.
NLB is the only solution that does not have this feature. NLB will route traffic to any web server and is oblivious to the answer: it can be an HTTP 500 or even no answer at all.
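The probe described above is simple to sketch. The snippet below is a hypothetical Python illustration (not part of any product discussed here) that starts a throwaway local web server and treats an HTTP 200 answer as healthy:

```python
import http.client
import http.server
import threading

class _OkHandler(http.server.BaseHTTPRequestHandler):
    """A stand-in for a healthy web server: answers every GET with HTTP 200."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")
    def log_message(self, *args):
        pass  # keep the demo output quiet

def probe(host, port, path="/health", timeout=2.0):
    """Send one health-check request; True means the node answered HTTP 200."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return status == 200
    except OSError:
        return False  # refused or timed out: the node is unhealthy

# Start a throwaway "web server" on a random free port and probe it.
server = http.server.HTTPServer(("127.0.0.1", 0), _OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
healthy = probe("127.0.0.1", server.server_address[1])
print(healthy)  # True
```

A real load balancer would run such a probe on a timer against every node and take a node out of rotation when the probe fails.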

Caching
This feature means the caching of static, or at least relatively static, elements of your web pages, such as CSS or JS files, or even entire HTML pages. The effect is that the load balancer does not have to contact the web servers for that type of content, which decreases response times.
NLB does not have this feature. If you put ARR on your web tier then this feature is effectively unavailable, as it will be your web servers that perform the caching.

SSL offload
SSL offload means that the load balancer takes over the SSL encryption and decryption process from the web servers, which also adds to the overall efficiency. SSL is fairly expensive from a CPU perspective, so it's nice to relieve the web machine of that responsibility and hand it over to the probably far more powerful load balancer.

NLB doesn't have this feature. Also, if you put ARR on your web tier then this feature is effectively unavailable, as it will be your web servers that perform the SSL encryption and decryption.
A benefit of this feature is that you only have to install the certificate on the load balancer. Otherwise you must make sure to replicate the SSL certificate(s) on every node of the web farm.

If you go down this path then make sure to go through the SSL issuing process on one of the web farm servers – create a Certificate Signing Request (CSR) and send it to a certificate authority (CA). The certificate that the CA generates will only work on the server where the CSR was generated. 

Install the certificate on the web farm server where you initiated the process and then you can export it to the other servers. The CSR can only be used on one server but an exported certificate can be used on multiple servers. 

There’s a new feature in IIS8 called Central Certificate Store which lets you synchronise your certificates across multiple servers.

Geo location
Physical load balancers and ARR provide some geolocation features. You can employ many load balancers throughout the world to be close to your customers, or have your load balancer point to different geographically distributed data centers. In reality you're better off looking at cloud-based solutions or CDNs such as Akamai, Windows Azure or Amazon.

Low upfront cost
Hardware load balancers are very expensive. ARR and NLB are free, meaning that you don't have to pay anything extra as they are built-in features of Windows Server and IIS. You probably want to put ARR on a separate machine, so that will involve some extra cost, but nowhere near what hardware load balancers will cost you.

Non-HTTP traffic
Hardware LBs and NLB can handle non-HTTP traffic, whereas ARR is a purely HTTP-based solution. So if you're looking into possibilities for distributing other types of traffic, such as for SMTP-based mail servers, then ARR is not an option.

Sticky sessions
This feature means that if a client returns for a second request, the load balancer will redirect that traffic to the same web server. It is also called client affinity. This can be important for web servers that store session state locally: when the same visitor comes back, we don't want the state relevant to that user to be unavailable because the request was routed to a different web server.

Hardware LBs and ARR provide a lot of options for introducing sticky sessions, including cookie-based solutions. NLB can only perform IP-based sticky sessions; it doesn't know about cookies and HTTP traffic.

Your target should be to avoid sticky sessions and solve your session management in a different way; more on state management in a future post. If you have sticky sessions then the load balancer is forced to direct traffic to a certain server irrespective of its actual load, thus defeating the purpose of load distribution. Also, if the server that received the first request becomes unavailable, the user will lose all session data and may receive an exception or unexpected default values in place of the values saved in the session variables.
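To see why IP-based affinity of the kind NLB offers is deterministic, here is a small hypothetical Python sketch (server names invented) that hashes the client IP to pick a back-end node; the same IP always lands on the same server:

```python
import hashlib

# Hypothetical back-end pool; a real farm would load this from configuration.
SERVERS = ["web01", "web02", "web03"]

def pick_server(client_ip):
    """IP-based affinity: hash the client IP and map it onto the pool."""
    digest = hashlib.md5(client_ip.encode("ascii")).digest()
    return SERVERS[digest[0] % len(SERVERS)]

# The same client IP is always routed to the same node...
assert pick_server("10.0.0.1") == pick_server("10.0.0.1")
# ...which also means the balancer cannot consider the node's actual load.
```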

Other types of load balancers

Software
With software load balancers you provide your own hardware while using vendor-supported software for the load balancing. The advantage is that you can size the hardware to your load-balancing needs, which can save you a lot of money.

The valuable information above is adapted from https://dotnetcodr.com/2013/06/17/web-farms-in-net-and-iis-part-1-a-general-introduction/; see that link for more details.

Monday, July 15, 2019

Easy way to understand the SQL join concept

Create table A(Code varchar(50), emp_no varchar(50))
Insert into A values ('101',12222)
Insert into A values ('102',23333)
Insert into A values ('103',34444)
Insert into A values ('104',45555)
Insert into A values ('105',56666)


Create table B(Code varchar(50), City varchar(50), Country varchar(50))

Insert into B values ('101','Mumbai', 'India')
Insert into B values ('101','Delhi', 'India')
Insert into B values ('101','Hyderabad', 'India')
Insert into B values ('102','Chennai', 'India')
Insert into B values ('103','Kolkata', 'India')


Select * From A;
Select * From B;

Select * From A Inner Join B On A.Code = B.Code; -- 5 rows: code 101 matches 3 rows of B, codes 102 and 103 match 1 each
Select * From A Left Join B On A.Code = B.Code; -- 7 rows: the 5 inner-join rows plus codes 104 and 105 with NULLs
Select * From A Right Join B On A.Code = B.Code; -- 5 rows: every row of B has a match in A
Select * From A Cross Join B; -- 25 rows: every row of A paired with every row of B
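The joins above can also be mimicked outside SQL Server; this Python sketch rebuilds the same two tables in memory and reproduces the row counts:

```python
# Rows of table A (Code, emp_no) and table B (Code, City, Country) from above.
A = [("101", "12222"), ("102", "23333"), ("103", "34444"),
     ("104", "45555"), ("105", "56666")]
B = [("101", "Mumbai", "India"), ("101", "Delhi", "India"),
     ("101", "Hyderabad", "India"), ("102", "Chennai", "India"),
     ("103", "Kolkata", "India")]

# Inner join: every pair of rows whose Code values match.
inner = [(a, b) for a in A for b in B if a[0] == b[0]]
# Left join: the inner-join rows plus unmatched rows of A padded with NULL.
left = inner + [(a, None) for a in A if all(a[0] != b[0] for b in B)]
# Right join: the inner-join rows plus unmatched rows of B padded with NULL.
right = inner + [(None, b) for b in B if all(a[0] != b[0] for a in A)]
# Cross join: the full Cartesian product.
cross = [(a, b) for a in A for b in B]

print(len(inner), len(left), len(right), len(cross))  # 5 7 5 25
```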

Thursday, May 30, 2019

Password validation in ASP.NET using regular expressions

Case 1: Validation for 8-10 characters with alphabets and numbers, and no special characters.

<asp:textbox id="txtPassword" runat="server" /> <asp:regularexpressionvalidator controltovalidate="txtPassword" display="Dynamic" errormessage="Password must be 8-10 characters long<br /> with at least one numeric character." forecolor="Red" id="RegularExpressionValidator3" runat="server" validationexpression="(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$"> </asp:regularexpressionvalidator>
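The Case 1 expression can be exercised outside ASP.NET as well; this Python sketch feeds a few sample passwords (invented here) to the exact pattern from the validator above:

```python
import re

pattern = r"(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$"

# The two lookaheads reject all-digit and all-letter inputs; the body
# then requires 8-10 alphanumeric characters.
ok = re.match(pattern, "abc12345") is not None        # mixed, 8 chars
all_letters = re.match(pattern, "abcdefgh") is not None
all_digits = re.match(pattern, "12345678") is not None
too_short = re.match(pattern, "ab12") is not None
print(ok, all_letters, all_digits, too_short)  # True False False False
```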

Case 2: Validation for 8-10 characters with alphabets, numbers and special characters.
 
<asp:textbox id="txtpasswordwithNoUpperCharacter" runat="server" /> <asp:regularexpressionvalidator display="Dynamic" errormessage="Password must be 8-10 characters long with at least one numeric,<br /> one alphabet and one special character." forecolor="Red" id="RegularExpressionValidator2" validationexpression="(?=^.{8,10}$)(?=.*\d)(?=.*[a-zA-Z])(?=.*[!@#$%^&amp;*()_+}{&quot;&gt;.&lt;,])(?!.*\s).*$" controltovalidate="txtpasswordwithNoUpperCharacter" runat="server"> </asp:regularexpressionvalidator>

Case 3: Validation for 8-10 characters with alphabets, numbers, one upper case letter and special characters.

<asp:textbox id="txtPasswordWithSpecialCharacter" runat="server" /> <asp:regularexpressionvalidator display="Dynamic" errormessage="Password must be 8-10 characters long with at least one numeric,<br />one upper case character and one special character." forecolor="Red" id="RegularExpressionValidator1" validationexpression="(?=^.{8,10}$)(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&amp;*()_+}{&quot;&gt;.&lt;,])(?!.*\s).*$" controltovalidate="txtPasswordWithSpecialCharacter" runat="server" />
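Likewise, the Case 3 expression can be tried out with Python's re module; the character class is taken as printed in the validator above, and the sample passwords are invented for illustration:

```python
import re

pattern = (r'(?=^.{8,10}$)(?=.*\d)(?=.*[a-z])(?=.*[A-Z])'
           r'(?=.*[!@#$%^&*()_+}{">.<,])(?!.*\s).*$')

valid = re.match(pattern, "Abc@1234") is not None       # all rules satisfied
no_upper = re.match(pattern, "abc@1234") is not None    # missing upper case
no_special = re.match(pattern, "Abcd1234") is not None  # missing special char
print(valid, no_upper, no_special)  # True False False
```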

Important Notes in C# and SQL

The break, goto, continue, return and throw statements are known as jump statements.

An abstract class can have access modifiers like private, protected and internal on its class members, but abstract members cannot have the private access modifier.

Constant fields or local variables must be assigned a value at the time of declaration, and after that they cannot be modified. Constants are implicitly static, hence you cannot declare a constant as static.

Abstract class: used to provide default behavior as well as common behavior that multiple derived classes can share and override.
http://www.dotnettricks.com/learn/csharp/a-deep-dive-into-csharp-abstract-class

Unlike fields, properties do not denote storage locations, and you cannot pass a property as a ref or out parameter.

Auto-implemented properties were introduced in C# 3.0 and make property declarations more concise. Unlike a standard property, in an auto-implemented property the wrapped or backing field is automatically created by the compiler but is not available for use in your source code:
public string Name { get; set; }

A try block can throw multiple exceptions, which can be handled by using multiple catch blocks.

A more specialized catch block should come before a generalized one; otherwise the specialized catch block will never be executed.
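The same ordering rule exists in other languages; this Python sketch demonstrates it with exception handlers, since ZeroDivisionError is a subclass of ArithmeticError:

```python
def classify(exc):
    try:
        raise exc
    except ZeroDivisionError:   # specialized handler first
        return "division by zero"
    except ArithmeticError:     # generalized handler second
        return "arithmetic error"

# Had the generalized handler come first, it would have swallowed
# ZeroDivisionError as well, making the specialized branch unreachable.
print(classify(ZeroDivisionError()))  # division by zero
print(classify(OverflowError()))      # arithmetic error
```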

Implicit conversion of derived classes to base class is called Upcasting and
Explicit conversion of base class to derived classes is called Downcasting.

Once a delegate is created, the method it is associated with never changes, because delegates are immutable in nature.

A delegate that holds references to more than one method is called a multicast delegate. A multicast delegate can only contain references to methods whose return type is void. The + and += operators are used to combine delegate instances.
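A rough Python analogy of a multicast delegate (class and handler names invented for this sketch) is a callable that keeps a list of handlers and invokes them in order, with += combining handlers as in C#:

```python
class MulticastDelegate:
    """Keeps a list of handlers and calls them in the order they were added."""
    def __init__(self):
        self._handlers = []
    def __iadd__(self, handler):      # d += handler, like C#'s +=
        self._handlers.append(handler)
        return self
    def __isub__(self, handler):      # d -= handler
        self._handlers.remove(handler)
        return self
    def __call__(self, *args):
        for h in self._handlers:      # return values are discarded, hence void
            h(*args)

log = []
d = MulticastDelegate()
d += lambda x: log.append(("first", x))
d += lambda x: log.append(("second", x))
d(42)
print(log)  # [('first', 42), ('second', 42)]
```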

The following types of decision-making statements exist in C#:
If statement
If-Else statement
If-Else-If statement or ladder
Switch statement

C# introduces a concept known as indexers, which are used for treating an object as an array. Indexers are usually known as smart arrays in C#.
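The closest Python counterpart to a C# indexer is the __getitem__/__setitem__ pair; the SmartArray class below is a hypothetical sketch of the idea:

```python
class SmartArray:
    """An object usable with array-style syntax, like a C# indexer."""
    def __init__(self, size):
        self._data = [None] * size
    def __getitem__(self, i):         # plays the role of the indexer's get
        return self._data[i]
    def __setitem__(self, i, value):  # plays the role of the indexer's set
        self._data[i] = value

arr = SmartArray(3)
arr[0] = "hello"   # the object itself is indexed, not a public field
print(arr[0])      # hello
```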

--==============================------------

A super key is a set of one or more keys that can be used to identify a record uniquely in a table. Example: Primary key, Unique key and Alternate key are subsets of super keys.

A candidate key is a set of one or more fields/columns that can identify a record uniquely in a table. There can be multiple candidate keys in one table. Each candidate key can work as the primary key.

A primary key is a set of one or more fields/columns of a table that uniquely identify a record in a database table. It cannot accept null or duplicate values. Only one candidate key can be the primary key.

An alternate key is a key that can work as a primary key. Basically it is a candidate key that is currently not the primary key.

A composite key is a combination of more than one field/column of a table. It can be a candidate key or a primary key.

A unique key is a set of one or more fields/columns of a table that uniquely identify a record in a database table. It is like a primary key, but it can accept a single null value and cannot have duplicate values.

A foreign key is a field in a database table that is the primary key in another table. It can accept multiple null and duplicate values.

Views are virtual tables that are compiled at run time. The data associated with a view is not physically stored in the view but in its base tables. We create views for security purposes, since they restrict the user to viewing only certain columns/fields of the table(s).

-------=================================-----------------
Data Annotation Validator Attributes 
DataType, DisplayName, DisplayFormat, Required, RegularExpression, Range, StringLength,
MaxLength, Bind, ScaffoldColumn(specify fields for hiding from editor forms)

A partial view is like a user control in ASP.NET Web Forms and is used for code re-usability. Partial views help us reduce code duplication.

The main difference between the two methods is that the Partial helper method renders a partial view into a string, while the RenderPartial method writes directly into the response stream instead of returning a string.

We can enable jQuery IntelliSense support in Visual Studio with MVC Razor by adding vsdoc.js (jquery-1.7.1-vsdoc.js).

Asp.net MVC Request Life Cycle
Browser
Request
Routing: Pattern matching system (RouteTable)
MvcHandler: initiates the real processing (IHttpHandler)
Controller: MvcHandler uses the IControllerFactory instance and tries to get an IController instance (IControllerFactory)
Action Execution: the controller's ActionInvoker determines which specific action to invoke on the controller (ActionInvoker)
View Result: the action method receives user input, prepares the appropriate response data, and then executes the result
View Engine: execution of the View Result involves the selection of the appropriate View Engine
View: the action method may return a text string, a binary file or JSON-formatted data.
For more details see https://www.dotnettricks.com/learn/mvc/aspnet-mvc-request-life-cycle

Procedure to search for an object in a database: find how many procedures use a given table

CREATE PROCEDURE [dbo].[spssearchcode]
  @text varchar(250),
  @dbname varchar(64) = 'admin'
AS BEGIN
  SET NOCOUNT ON;

  IF @dbname IS NULL
  BEGIN
    -- enumerate all databases
    DECLARE #db CURSOR FOR SELECT Name FROM master..sysdatabases
    DECLARE @c_dbname varchar(64)

    OPEN #db
    FETCH #db INTO @c_dbname
    WHILE @@FETCH_STATUS <> -1
    BEGIN
      EXECUTE spssearchcode @text, @c_dbname
      FETCH #db INTO @c_dbname
    END
    CLOSE #db
    DEALLOCATE #db
  END -- IF @dbname IS NULL
  ELSE
  BEGIN -- @dbname is not null
    DECLARE @sql varchar(250)
    -- build the dynamic LIKE search over the module definitions
    SELECT @sql = 'select ''' + @dbname + ''' as db, o.name, m.definition '
    SELECT @sql = @sql + ' from ' + @dbname + '.sys.sql_modules m '
    SELECT @sql = @sql + ' inner join ' + @dbname + '..sysobjects o on m.object_id = o.id'
    SELECT @sql = @sql + ' where [definition] like ''%' + @text + '%'''
    EXECUTE (@sql)
  END -- @dbname is not null
END