Sunday, December 20, 2009

ASP.NET Links for Beginners

Some links for beginners (web development):

Basic HTML:

http://www.codeproject.com/KB/HTML/htmlbeginner.aspx

http://www.itechcollege.com/courses/HTML/



ASP:

http://www.codeproject.com/KB/asp/aspbegin.aspx

http://www.itechcollege.com/courses/ASP/

http://www.codeproject.com/KB/asp/uwe-guestbook.aspx

http://www.codeproject.com/KB/asp/CDIChatSubmit.aspx



JavaScript:

http://www.itechcollege.com/courses/JavaScript/



ASP.Net:

http://www.itechcollege.com/courses/ASP-NET/

http://www.asp.net/ (contains a large collection of video tutorials)



CSS:

http://www.itechcollege.com/courses/CSS/

AJAX:

http://www.itechcollege.com/courses/AJAX/



State Management (ViewState & Session):

http://www.codeproject.com/KB/aspnet/BegViewState.aspx

http://www.codeproject.com/KB/aspnet/ExploringSession.aspx





And more…:

http://www.w3schools.com/

http://www.codeproject.com/

http://www.codeproject.com/KB/aspnet/Beginners_Walk_Web.aspx

Saturday, December 12, 2009

What is TDD?


The steps of test-first design (TFD) are overviewed in the UML activity diagram of Figure 1. The first step is to quickly add a test, basically just enough code to fail. Next, you run your tests, often the complete test suite, although for the sake of speed you may decide to run only a subset, to ensure that the new test does in fact fail. You then update your functional code to make it pass the new test. The fourth step is to run your tests again. If they fail, you need to update your functional code and retest. Once the tests pass, the next step is to start over (you may first need to refactor any duplication out of your design as needed, turning TFD into TDD).

TDD = Refactoring + TFD.

TDD completely turns traditional development around. When you go to implement a new feature, the first question you ask is whether the existing design is the best design possible to enable you to implement that functionality. If so, you proceed via a TFD approach. If not, you refactor locally to change the portion of the design affected by the new feature, enabling you to add that feature as easily as possible. As a result, you are always improving the quality of your design, thereby making it easier to work with in the future.


Instead of writing functional code first and then your testing code as an afterthought, if you write it at all, you write your test code before your functional code. Furthermore, you do so in very small steps: one test and a small bit of corresponding functional code at a time. A programmer taking a TDD approach refuses to write a new function until there is first a test that fails because that function isn't present. In fact, they refuse to add even a single line of code until a test exists for it. Once the test is in place, they then do the work required to ensure that the test suite now passes (your new code may break several existing tests as well as the new one). This sounds simple in principle, but when you are first learning to take a TDD approach, it requires great discipline because it is easy to "slip" and write functional code without first writing a new test. One of the advantages of pair programming is that your pair helps you stay on track.

An underlying assumption of TDD is that you have a unit-testing framework available to you. Agile software developers often use the xUnit family of open source tools, such as JUnit or VBUnit, although commercial tools are also viable options. Without such tools TDD is virtually impossible. Figure 2 presents a UML state chart diagram for how people typically work with the xUnit tools. This diagram was suggested to me by Keith Ray.
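To make the cycle concrete, here is a minimal sketch of one red-green pass using Python's unittest module, which belongs to the same xUnit family mentioned above. The Account class, its deposit() method, and the amounts are hypothetical names used purely for illustration; in a real test-first pass you would write the test, watch it fail, and only then add the functional code that makes it pass.

import unittest

# Step 2 of the cycle: functional code written only after the test below
# existed and failed. Account and deposit() are made-up names for this sketch.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount


# Step 1 of the cycle: just enough test code to fail before deposit() existed.
class AccountTests(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account()
        account.deposit(50)
        self.assertEqual(account.balance, 50)


if __name__ == "__main__":
    unittest.main()  # run the suite; with deposit() in place it now passes (green)
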
Kent Beck, who popularized TDD in eXtreme Programming (XP) (Beck 2000), defines two simple rules for TDD (Beck 2003). First, you should write new business code only when an automated test has failed. Second, you should eliminate any duplication that you find. Beck explains how these two simple rules generate complex individual and group behavior:

* You design organically, with the running code providing feedback between decisions.
* You write your own tests because you can't wait 20 times per day for someone else to write them for you.
* Your development environment must provide rapid response to small changes (e.g., you need a fast compiler and regression test suite).
* Your designs must consist of highly cohesive, loosely coupled components (e.g., your design is highly normalized) to make testing easier (this also makes evolution and maintenance of your system easier).

For developers, the implication is that they need to learn how to write effective unit tests. Beck’s experience is that good unit tests:

* Run fast (they have short setups, run times, and breakdowns).
* Run in isolation (you should be able to reorder them).
* Use data that makes them easy to read and to understand.
* Use real data (e.g., copies of production data) when they need to.
* Represent one step towards your overall goal.

Thursday, February 12, 2009

Virtual Circuits


The connection through a Frame Relay network between two DTEs is called a virtual circuit (VC). The circuits are virtual because there is no direct electrical connection from end to end. The connection is logical, and data moves from end to end, without a direct electrical circuit. With VCs, Frame Relay shares the bandwidth among multiple users and any single site can communicate with any other single site without using multiple dedicated physical lines.

There are two ways to establish VCs:

SVCs, switched virtual circuits, are established dynamically by sending signaling messages to the network (CALL SETUP, DATA TRANSFER, IDLE, CALL TERMINATION).
PVCs, permanent virtual circuits, are preconfigured by the carrier, and after they are set up, only operate in DATA TRANSFER and IDLE modes. Note that some publications refer to PVCs as private VCs.



In the figure, there is a VC between the sending and receiving nodes. The VC follows the path A, B, C, and D. Frame Relay creates a VC by storing input-port to output-port mapping in the memory of each switch and thus links one switch to another until a continuous path from one end of the circuit to the other is identified. A VC can pass through any number of intermediate devices (switches) located within the Frame Relay network.

The question you may ask at this point is, "How are the various nodes and switches identified?"


VCs provide a bidirectional communication path from one device to another. VCs are identified by DLCIs. DLCI values typically are assigned by the Frame Relay service provider (for example, the telephone company). Frame Relay DLCIs have local significance, which means that the values themselves are not unique in the Frame Relay WAN. A DLCI identifies a VC to the equipment at an endpoint. A DLCI has no significance beyond the single link. Two devices connected by a VC may use a different DLCI value to refer to the same connection.

Locally significant DLCIs have become the primary method of addressing, because the same address can be used in several different locations while still referring to different connections. Local addressing prevents a customer from running out of DLCIs as the network grows.

This is the same network as presented in the previous figure, but this time, as the frame moves across the network, Frame Relay labels each VC with a DLCI. The DLCI is stored in the address field of every frame transmitted to tell the network how the frame should be routed. The Frame Relay service provider assigns DLCI numbers. Usually, DLCIs 0 to 15 and 1008 to 1023 are reserved for special purposes. Therefore, service providers typically assign DLCIs in the range of 16 to 1007.

In this example, the frame uses DLCI 102. It leaves the router (R1) using Port 0 and VC 102. At switch A, the frame exits Port 1 using VC 432. This process of VC-port mapping continues through the WAN until the frame reaches its destination at DLCI 201, as shown in the figure. The DLCI is stored in the address field of every frame transmitted.
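As an illustration only (this is not vendor code; the second switch, its port numbers, and its intermediate DLCI values are hypothetical additions to the figure's example), the per-switch mapping can be thought of as a lookup table from (incoming port, incoming DLCI) to (outgoing port, outgoing DLCI). A small Python sketch:

# Illustrative sketch: each Frame Relay switch keeps a mapping from
# (incoming port, incoming DLCI) to (outgoing port, outgoing DLCI).
# Values loosely follow the example above: R1 sends on DLCI 102, switch A
# forwards out Port 1 on VC 432, and the frame is finally delivered on DLCI 201.
switch_a_table = {
    (0, 102): (1, 432),   # frame from R1 arrives on port 0 with DLCI 102
}
switch_b_table = {
    (5, 432): (7, 201),   # hypothetical next switch completing the path
}

def forward(table, in_port, in_dlci):
    """Look up the locally significant DLCI and rewrite it for the next link."""
    out_port, out_dlci = table[(in_port, in_dlci)]
    return out_port, out_dlci

port, dlci = forward(switch_a_table, 0, 102)
print("Switch A forwards out port", port, "with DLCI", dlci)   # port 1, DLCI 432
port, dlci = forward(switch_b_table, 5, dlci)
print("Switch B delivers out port", port, "with DLCI", dlci)   # port 7, DLCI 201

Because each lookup is purely local, the same DLCI number can safely be reused on other links elsewhere in the provider's network, which is exactly why DLCIs are described as locally significant.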

PPP


What is PPP?

Recall that HDLC is the default serial encapsulation method when you connect two Cisco routers. With an added protocol type field, the Cisco version of HDLC is proprietary. Thus, Cisco HDLC can only work with other Cisco devices. However, when you need to connect to a non-Cisco router, you should use PPP encapsulation.

PPP encapsulation has been carefully designed to retain compatibility with most commonly used supporting hardware. PPP encapsulates data frames for transmission over Layer 2 physical links. PPP establishes a direct connection using serial cables, phone lines, trunk lines, cellular telephones, specialized radio links, or fiber-optic links. There are many advantages to using PPP, including the fact that it is not proprietary. Moreover, it includes many features not available in HDLC:

The link quality management feature monitors the quality of the link. If too many errors are detected, PPP takes the link down.
PPP supports PAP and CHAP authentication. This feature is explained and practiced in a later section.


PPP contains three main components:

HDLC protocol for encapsulating datagrams over point-to-point links.
Extensible Link Control Protocol (LCP) to establish, configure, and test the data link connection.
Family of Network Control Protocols (NCPs) for establishing and configuring different network layer protocols. PPP allows the simultaneous use of multiple network layer protocols. Some of the more common NCPs are Internet Protocol Control Protocol, Appletalk Control Protocol, Novell IPX Control Protocol, Cisco Systems Control Protocol, SNA Control Protocol, and Compression Control Protocol.
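As a rough sketch of how these components come into play when a link is brought up (the phase names follow the standard PPP life cycle; the code is illustrative only and is not part of any Cisco IOS configuration):

# Illustrative only: the order of PPP phases when a serial link comes up.
# LCP establishes and tests the link, optional PAP/CHAP authentication runs
# next, then the NCPs (for example, IPCP for IP) configure each network layer
# protocol that will run over the link.
PPP_PHASES = [
    "LINK DEAD",                      # physical layer not yet ready
    "LINK ESTABLISHMENT (LCP)",       # negotiate options, test the link
    "AUTHENTICATION (optional)",      # PAP or CHAP, if configured
    "NETWORK-LAYER PROTOCOL (NCPs)",  # IPCP, IPXCP, etc. bring up Layer 3
    "LINK TERMINATION (LCP)",         # close the link when finished
]

for phase in PPP_PHASES:
    print(phase)
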

TDM


Time Division Multiplexing
Bell Laboratories invented time-division multiplexing (TDM) to maximize the amount of voice traffic carried over a medium. Before multiplexing, each telephone call required its own physical link. This was an expensive and unscalable solution. TDM divides the bandwidth of a single link into separate channels or time slots. TDM transmits two or more channels over the same link by allocating a different time interval (time slot) for the transmission of each channel. In effect, the channels take turns using the link.

TDM is a physical layer concept. It has no regard for the nature of the information that is being multiplexed onto the output channel. TDM is independent of the Layer 2 protocol that has been used by the input channels.

TDM can be explained by an analogy to highway traffic. To transport traffic from four roads to another city, you can send all the traffic on one lane if the feeding roads are equally serviced and the traffic is synchronized. So, if each of the four roads puts a car onto the main highway every four seconds, the highway gets a car at the rate of one each second. As long as the speed of all the cars is synchronized, there is no collision. At the destination, the reverse happens and the cars are taken off the highway and fed to the local roads by the same synchronous mechanism.

This is the principle used in synchronous TDM when sending data over a link. TDM increases the capacity of the transmission link by slicing time into smaller intervals so that the link carries the bits from multiple input sources, effectively increasing the number of bits transmitted per second. With TDM, the transmitter and the receiver both know exactly which signal is being sent.

In our example, a multiplexer (MUX) at the transmitter accepts three separate signals. The MUX breaks each signal into segments. The MUX puts each segment into a single channel by inserting each segment into a timeslot.

A MUX at the receiving end reassembles the TDM stream into the three separate data streams based only on the timing of the arrival of each bit. A technique called bit interleaving keeps track of the number and sequence of the bits from each specific transmission so that they can be quickly and efficiently reassembled into their original form upon receipt. Byte interleaving performs the same functions, but because there are eight bits in each byte, the process needs a bigger or longer time slot.
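To make the idea of interleaving concrete, here is a small Python sketch (the channel contents are invented for illustration). The MUX takes one unit from each input channel per time slot, and the DEMUX rebuilds the original streams purely from arrival order, which is the essence of the bit or byte interleaving described above.

# Sketch of synchronous TDM: one unit from each input channel per time slot.
def mux(channels):
    """Interleave equal-length channels into one TDM stream, one unit per slot."""
    stream = []
    for slot in zip(*channels):      # one round = one unit from each channel
        stream.extend(slot)
    return stream

def demux(stream, num_channels):
    """Rebuild the original channels from the arrival order of the slots."""
    return [stream[i::num_channels] for i in range(num_channels)]

channels = [list("AAAA"), list("BBBB"), list("CCCC")]
tdm_stream = mux(channels)
print("".join(tdm_stream))                          # ABCABCABCABC
print(["".join(c) for c in demux(tdm_stream, 3)])   # ['AAAA', 'BBBB', 'CCCC']
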

Wednesday, February 11, 2009

WAN Physical Layer Concepts


WAN Devices

WANs use numerous types of devices that are specific to WAN environments, including:

Modem - Modulates an analog carrier signal to encode digital information, and also demodulates the carrier signal to decode the transmitted information. A voiceband modem converts the digital signals produced by a computer into voice frequencies that can be transmitted over the analog lines of the public telephone network. On the other side of the connection, another modem converts the sounds back into a digital signal for input to a computer or network connection. Faster modems, such as cable modems and DSL modems, transmit using higher broadband frequencies.
CSU/DSU - Digital lines, such as T1 or T3 carrier lines, require a channel service unit (CSU) and a data service unit (DSU). The two are often combined into a single piece of equipment, called the CSU/DSU. The CSU provides termination for the digital signal and ensures connection integrity through error correction and line monitoring. The DSU converts the T-carrier line frames into frames that the LAN can interpret and vice versa.
Access server - Concentrates dial-in and dial-out user communications. An access server may have a mixture of analog and digital interfaces and support hundreds of simultaneous users.
WAN switch - A multiport internetworking device used in carrier networks. These devices typically switch traffic such as Frame Relay, ATM, or X.25, and operate at the data link layer of the OSI reference model. Public switched telephone network (PSTN) switches may also be used within the cloud for circuit-switched connections like Integrated Services Digital Network (ISDN) or analog dialup.
Router - Provides internetworking and WAN access interface ports that are used to connect to the service provider network. These interfaces may be serial connections or other WAN interfaces. With some types of WAN interfaces, an external device such as a DSU/CSU or modem (analog, cable, or DSL) is required to connect the router to the local point of presence (POP) of the service provider.
Core router - A router that resides within the middle or backbone of the WAN rather than at its periphery. To fulfill this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the WAN core, and it must be able to forward IP packets at full speed on all of those interfaces. The router must also support the routing protocols being used in the core.

WAN Technology Overview



WANs and the OSI Model

As described in relation to the OSI reference model, WAN operations focus primarily on Layer 1 and Layer 2. WAN access standards typically describe both physical layer delivery methods and data link layer requirements, including physical addressing, flow control, and encapsulation. WAN access standards are defined and managed by a number of recognized authorities, including the International Organization for Standardization (ISO), the Telecommunications Industry Association (TIA), and the Electronic Industries Alliance (EIA).

The physical layer (OSI Layer 1) protocols describe how to provide electrical, mechanical, operational, and functional connections to the services of a communications service provider.

The data link layer (OSI Layer 2) protocols define how data is encapsulated for transmission toward a remote location and the mechanisms for transferring the resulting frames. A variety of different technologies are used, such as Frame Relay and ATM. Some of these protocols use the same basic framing mechanism, High-Level Data Link Control (HDLC), an ISO standard, or one of its subsets or variants.