Monday, May 5, 2014

F5 Application Delivery Fundamentals Exam Study Guide


F5 101 Application Delivery Fundamentals Exam Study Guide

This is the preliminary test for anyone pursuing any F5 certification. It is not an easy test; if you do not have any experience with F5, please do not waste your money and time. I am attaching the study material, which may help you pass the exam. The materials were already published on other blogs; I have only rearranged them in a way that is easier to understand. The official blueprint is available at http://www.f5.com/pdf/certification/exams/blueprint-app-delivery-fundamentals-exam.pdf.


The Open Systems Interconnection (OSI) reference model describes how information from a software application in one computer moves through a network medium to a software application in another computer. The OSI reference model is a conceptual model composed of seven layers, each specifying particular network functions. The model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered the primary architectural model for intercomputer communications. The OSI model divides the tasks involved with moving information between networked computers into seven smaller, more manageable task groups.


OSI Model Physical Layer

The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between communicating network systems. Physical layer specifications define characteristics such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and physical connectors. Physical layer implementations can be categorized as either LAN or WAN specifications.
Figure: Physical Layer Implementations Can Be LAN or WAN Specifications



OSI Model Data Link Layer
The data link layer provides reliable transit of data across a physical network link. Different data link layer specifications define different network and protocol characteristics, including physical addressing, network topology, error notification, sequencing of frames, and flow control. Physical addressing (as opposed to network addressing) defines how devices are addressed at the data link layer. Network topology consists of the data link layer specifications that often define how devices are to be physically connected, such as in a bus or a ring topology. Error notification alerts upper-layer protocols that a transmission error has occurred, and the sequencing of data frames reorders frames that are transmitted out of sequence. Finally, flow control moderates the transmission of data so that the receiving device is not overwhelmed with more traffic than it can handle at one time.
Figure: The Data Link Layer Contains Two Sublayers

The Logical Link Control (LLC) sublayer of the data link layer manages communications between devices over a single link of a network. LLC is defined in the IEEE 802.2 specification and supports both connectionless and connection-oriented services used by higher-layer protocols. IEEE 802.2 defines a number of fields in data link layer frames that enable multiple higher-layer protocols to share a single physical data link. The Media Access Control (MAC) sublayer of the data link layer manages protocol access to the physical network medium. The IEEE MAC specification defines MAC addresses, which enable multiple devices to uniquely identify one another at the data link layer.
Examples of data link protocols are Ethernet for local area networks (multi-node), the Point-to-Point Protocol (PPP), HDLC and ADCCP for point-to-point (dual-node) connections.

OSI Model Network Layer
The network layer defines the network address, which differs from the MAC address. Some network layer implementations, such as the Internet Protocol (IP), define network addresses in such a way that route selection can be determined systematically: by applying the subnet mask, a host can compare the source network address with the destination network address and decide whether the destination is local or must be reached through a router (see the sketch below).
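A minimal Python sketch of that comparison, using the standard ipaddress module; the addresses and mask are purely illustrative:

import ipaddress

# Apply the subnet mask to both addresses and compare network prefixes:
# if the destination falls inside the local network, deliver directly,
# otherwise send the packet to the default gateway.
source = ipaddress.ip_interface("192.168.10.25/255.255.255.0")
destination = ipaddress.ip_address("192.168.10.200")

if destination in source.network:
    print("Destination is on the local network: deliver directly")
else:
    print("Destination is on a remote network: forward to the default gateway")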

OSI Model Transport Layer

The transport layer accepts data from the session layer and segments the data for transport across the network. Generally, the transport layer is responsible for making sure that the data is delivered error-free and in the proper sequence. Flow control generally occurs at the transport layer.
Flow control manages data transmission between devices so that the transmitting device does not send more data than the receiving device can process. Multiplexing enables data from several applications to be transmitted onto a single physical link. Virtual circuits are established, maintained, and terminated by the transport layer. Error checking involves creating various mechanisms for detecting transmission errors, while error recovery involves acting, such as requesting that data be retransmitted, to resolve any errors that occur.
The transport protocols used on the Internet are TCP and UDP.

OSI Model Session Layer

The session layer establishes, manages, and terminates communication sessions. Communication sessions consist of service requests and service responses that occur between applications located in different network devices. These requests and responses are coordinated by protocols implemented at the session layer. Some examples of session-layer implementations include Zone Information Protocol (ZIP), the AppleTalk protocol that coordinates the name binding process, and Session Control Protocol (SCP), the DECnet Phase IV session layer protocol.

OSI Model Presentation Layer

The presentation layer provides a variety of coding and conversion functions that are applied to application layer data. These functions ensure that information sent from the application layer of one system will be readable by the application layer of another system. Some examples of presentation layer coding and conversion schemes include common data representation formats, conversion of character representation formats, common data compression schemes, and common data encryption schemes.

Presentation layer implementations are not typically associated with a particular protocol stack. Some well-known standards for video include QuickTime and Motion Picture Experts Group (MPEG). QuickTime is an Apple Computer specification for video and audio, and MPEG is a standard for video compression and coding.
Among the well-known graphic image formats are Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), and Tagged Image File Format (TIFF). GIF is a standard for compressing and coding graphic images. JPEG is another compression and coding standard for graphic images, and TIFF is a standard coding format for graphic images.

OSI Model Application Layer

The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application.
This layer interacts with software applications that implement a communicating component.
When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
Some examples of application layer implementations include Telnet, File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP).


 

 

Internetwork Addressing

Internetwork addresses identify devices separately or as members of a group. Addressing schemes vary depending on the protocol family and the OSI layer. Three types of internetwork addresses are commonly used: data link layer addresses, Media Access Control (MAC) addresses, and network layer addresses.

Data Link Layer Addresses

A data link layer address uniquely identifies each physical network connection of a network device. Data-link addresses sometimes are referred to as physical or hardware addresses. Data-link addresses usually exist within a flat address space and have a pre-established and typically fixed relationship to a specific device.
End systems generally have only one physical network connection and thus have only one data-link address. Routers and other internetworking devices typically have multiple physical network connections and therefore have multiple data-link addresses.
Figure: Each Interface on a Device Is Uniquely Identified by a Data-Link Address

MAC Addresses

Media Access Control (MAC) addresses consist of a subset of data link layer addresses. MAC addresses identify network entities in LANs that implement the IEEE MAC addresses of the data link layer. As with most data-link addresses, MAC addresses are unique for each LAN interface.


MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number, or another value administered by the specific vendor. MAC addresses sometimes are called burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the interface card initializes.
Figure: The MAC Address Contains a Unique Format of Hexadecimal Digits
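As a small illustration of the OUI split described above, here is a hedged Python sketch that separates a made-up MAC address into its IEEE-assigned and vendor-assigned halves:

# Split a MAC address into the IEEE-assigned OUI (first 3 octets)
# and the vendor-assigned portion (last 3 octets). The address is illustrative.
def split_mac(mac):
    octets = mac.replace("-", ":").lower().split(":")
    if len(octets) != 6:
        raise ValueError("expected 6 octets, e.g. 00:1a:2b:3c:4d:5e")
    oui = ":".join(octets[:3])
    device = ":".join(octets[3:])
    return oui, device

oui, device = split_mac("00:1A:2B:3C:4D:5E")
print("OUI (manufacturer):", oui)       # 00:1a:2b
print("Vendor-assigned part:", device)  # 3c:4d:5e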


ARP- Address Resolution Protocol

Address Resolution Protocol (ARP) is the method used in the TCP/IP suite. When a network device needs to send data to another device on the same network, it knows the source and destination network addresses for the data transfer. It must somehow map the destination address to a MAC address before forwarding the data. First, the sending station will check its ARP table to see if it has already discovered this destination station's MAC address. If it has not, it will send a broadcast on the network with the destination station's IP address contained in the broadcast. Every station on the network receives the broadcast and compares the embedded IP address to its own. Only the station with the matching IP address replies to the sending station with a packet containing the MAC address for the station. The first station then adds this information to its ARP table for future reference and proceeds to transfer the data.
When the destination device lies on a remote network, one beyond a router, the process is the same except that the sending station sends the ARP request for the MAC address of its default gateway. It then forwards the information to that device. The default gateway will then forward the information over whatever networks necessary to deliver the packet to the network on which the destination device resides. The router on the destination device's network then uses ARP to obtain the MAC of the actual destination device and delivers the packet.
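The table-lookup behaviour described above can be sketched roughly in Python. This models only the cache consult-and-learn logic, not the actual broadcast frames on the wire, and all names and values are illustrative:

# A highly simplified sketch of the ARP decision: check the table first,
# broadcast only on a miss, then cache the reply for future transfers.
arp_table = {}  # IP address -> MAC address

def resolve(ip, send_arp_broadcast):
    """Return the MAC address for ip, consulting the ARP table first."""
    if ip in arp_table:                  # already discovered earlier
        return arp_table[ip]
    mac = send_arp_broadcast(ip)         # "who has <ip>? tell me your MAC"
    arp_table[ip] = mac                  # cache the reply for next time
    return mac

# Example use with a stubbed-out broadcast function:
print(resolve("192.168.1.10", lambda ip: "00:1a:2b:3c:4d:5e"))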
The Hello protocol is a network layer protocol that enables network devices to identify one another and indicate that they are still functional. When a new end system powers up, for example, it broadcasts hello messages onto the network. Devices on the network then return hello replies, and hello messages are also sent at specific intervals to indicate that they are still functional. Network devices can learn the MAC addresses of other devices by examining Hello protocol packets.
Some protocol suites use predictable MAC addresses: the network layer either embeds the MAC address in the network layer address or uses an algorithm to determine it, so no separate resolution step is needed.

Network Layer Addresses

A network layer address identifies an entity at the network layer of the OSI layers. Network addresses usually exist within a hierarchical address space and sometimes are called virtual or logical addresses.
The relationship between a network address and a device is logical and unfixed; it typically is based either on physical network characteristics (the device is on a particular network segment) or on groupings that have no physical basis (the device is part of an AppleTalk zone). End systems require one network layer address for each network layer protocol that they support. (This assumes that the device has only one physical network connection.) Routers and other internetworking devices require one network layer address per physical network connection for each network layer protocol supported. For example, a router with three interfaces each running AppleTalk, TCP/IP, and OSI must have three network layer addresses for each interface. The router therefore has nine network layer addresses.
The figure below illustrates how each network interface must be assigned a network address for each protocol supported.

Figure: Each Network Interface Must Be Assigned a Network Address for Each Protocol Supported


Hierarchical Versus Flat Address Space

Internetwork address space typically takes one of two forms: hierarchical address space or flat address space. A hierarchical address space is organized into numerous subgroups, each successively narrowing an address until it points to a single device (in a manner similar to street addresses). A flat address space is organized into a single group (in a manner similar to U.S. Social Security numbers).
Hierarchical addressing offers certain advantages over flat-addressing schemes. Address sorting and recall is simplified using comparison operations. For example, "Ireland" in a street address eliminates any other country as a possible location.

The figure below illustrates the difference between hierarchical and flat address spaces.
Figure: Hierarchical and Flat Address Spaces Differ in Comparison Operations

Address Assignments

Addresses are assigned to devices as one of two types: static and dynamic. Static addresses are assigned by a network administrator according to a preconceived internetwork addressing plan. A static address does not change until the network administrator manually changes it. Dynamic addresses are obtained by devices when they attach to a network, by means of some protocol-specific process. A device using a dynamic address often has a different address each time that it connects to the network. Some networks use a server to assign addresses. Server-assigned addresses are recycled for reuse as devices disconnect. A device is therefore likely to have a different address each time that it connects to the network.

Addresses Versus Names

Internetwork devices usually have both a name and an address associated with them. Internetwork names typically are location-independent and remain associated with a device wherever that device moves (for example, from one building to another). Internetwork addresses usually are location-dependent and change when a device is moved (although MAC addresses are an exception to this rule). As with network addresses being mapped to MAC addresses, names are usually mapped to network addresses through some protocol. The Internet uses Domain Name System (DNS) to map the name of a device to its IP address. For example, it's easier for you to remember www.cisco.com instead of some IP address. Therefore, you type www.cisco.com into your browser when you want to access Cisco's web site. Your computer performs a DNS lookup of the IP address for Cisco's web server and then communicates with it using the network address.
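As a quick illustration of the name-to-address mapping, the following snippet asks the system resolver (which in turn uses DNS) for an address. It assumes network access and uses www.cisco.com only as an example:

import socket

# The resolver performs the DNS lookup and returns an IP address
# that the computer then uses to communicate with the web server.
print(socket.gethostbyname("www.cisco.com"))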

GLOSSARY 


1. FTP (File Transfer Protocol) - Used to transfer files over the internet using TCP/IP.
2. HTTP (Hypertext Transfer Protocol) - Underlying protocol used by the World Wide Web. Allows Web servers and browsers to communicate with each other.
3. SMTP (Simple Mail Transfer Protocol) - Protocol used to send email messages between servers.
4. DNS (Domain Name System) - An internet service that translates domain names, such as www.yahoo.com, into IP addresses.
5. TFTP (Trivial File Transfer Protocol) - Simplified version of the FTP protocol which has no security features.
6. NFS (Network File System) - Client/Server application designed by SUN MICROSYSTEMS to allow all network users to access files stored on different computer types.
7. Telnet - terminal emulation program that allows you to connect to a server and enter information and commands as if you were working directly at the server terminal.
8. ASCII - a code for representing English characters as numbers.
9. EBCDIC (Extended Binary-Coded Decimal Interchange Code) - IBM code for representing characters as numbers.
10. MIDI (Musical Instrument Digital Interface) - adopted by the electronic music industry for controlling devices, such as synthesizers and sound cards, that emit music.
11. MPEG (Moving Pictures Experts Group) - the family of digital video compression standards and file formats developed by the ISO group.
12. JPEG (Joint Photographic Experts Group) - a lossy compression format for color images that can reduce file sizes to roughly 5% of their original size while losing some image detail.
13. SQL (Structured Query Language) - a standardized query language for requesting information from a database.
14. RPC (Remote Procedure Call) - allows a program on one computer to execute a program on a server.
15. TCP (Transmission Control Protocol) - enables two hosts to establish a connection and exchange streams of data.
16. UDP (User Datagram Protocol) - offers a direct way to send and receive datagrams over an IP network with very few error recovery services.
17. IP (Internet Protocol) - specifies the format of packets and the addressing schemes.
18. ICMP (Internet Control Message Protocol) - an extension of IP which supports packets containing error, control, and informational messages.
19. ARP (Address Resolution Protocol) - used to convert an IP address to a physical address.
20. PING - a utility to check if an IP address is accessible.
21. Traceroute - utility that tracks a packet from your computer to an internet host showing how many hops and how long it took.
22. IEEE 802.2 - divides the data link layer into two sublayers -- the logical link control (LLC) layer and the media access control (MAC) layer.
23. 802.3 - Defines the MAC layer for bus networks that use CSMA/CD. This is the basis of the Ethernet standard.
24. 802.5 - Defines the MAC layer for token-ring networks.


VLAN

You can associate physical interfaces on the BIG-IP system directly with VLANs. In this way, you can associate multiple interfaces with a single VLAN, or you can associate a single interface with multiple VLANs.
You do not need physical routers to establish communication between separate VLANs. Instead, the BIG-IP system can process messages between VLANs.

You can incorporate a BIG-IP system into existing, multi-vendor switched environments, due to the BIG-IP system's compliance with the IEEE 802.1Q VLAN standard.
You can combine two or more VLANs into an object known as a VLAN group. With a VLAN group, a host in one VLAN can communicate with a host in another VLAN using a combination of Layer 2 forwarding and IP routing. This offers both performance and reliability benefits.

When sending a request to a destination server, the BIG-IP system can use these self IP addresses to determine the specific VLAN that contains the destination server.
Tag: Specifies the VLAN ID. If you do not specify a VLAN ID, the BIG-IP system assigns an ID automatically. The value of a VLAN tag can be between 1 and 4094.
Source Check: Causes the BIG-IP system to verify that the return path of an initial packet is through the same VLAN from which the packet originated.
A VLAN tag is a unique ID number that you assign to a VLAN. If you do not explicitly assign a tag to a VLAN, the BIG-IP system assigns a tag automatically. The value of a VLAN tag can be between 1 and 4094. Once you or the BIG-IP assigns a tag to a VLAN, any message sent from a host in that VLAN includes this VLAN tag as a header in the message.
The MAC address of a VLAN is the same as the MAC address of the lowest-numbered interface assigned to that VLAN.
The BIG-IP system supports two methods for sending and receiving messages through an interface that is a member of one or more VLANs. These two methods are port-based access to VLANs and tag-based access to VLANs. The method used by a VLAN is determined by the way that you add a member interface to a VLAN.
Port-based access to VLANs occurs when you add an interface to a VLAN as an untagged interface. In this case, the VLAN is the only VLAN that you can associate with that interface. This limits the interface to accepting traffic only from that VLAN, instead of from multiple VLANs. If you want to give an interface the ability to accept and receive traffic for multiple VLANs, you add the same interface to each VLAN as a tagged interface. The following section describes tagged interfaces.
With tag-based access to VLANs, the BIG-IP system accepts frames for a VLAN because the frames have tags in their headers and the tag matches the VLAN identification number for the VLAN. An interface that accepts frames containing VLAN tags is a tagged member of the VLAN. Frames sent out through tagged interfaces contain a tag in their header.
Tag-based access to VLANs occurs when you add an interface to a VLAN as a tagged interface. You can add the same tagged interface to multiple VLANs, thereby allowing the interface to accept traffic from each VLAN with which the interface is associated.
When you add an interface to a VLAN as a tagged interface, the BIG-IP system associates the interface with the VLAN identification number, or tag, which becomes embedded in a header of a frame.
Note: Every VLAN has a tag. You can assign the tag explicitly when creating the VLAN, or the BIG-IP system assigns it automatically if you do not supply one.

When you enable the Source Check setting, the BIG-IP system verifies that the return path for an initial packet is through the same VLAN from which the packet originated. The system performs this verification only if you check the Source Check box for the VLAN, and if the global setting Auto Last Hop is not enabled. The Auto Last Hop setting is described below.


Auto Last Hop: Specifies, when checked (enabled), that the system automatically maps the last hop for pools.

The value of the maximum transmission unit, or MTU, is the largest size that the BIG-IP system allows for an IP datagram passing through a BIG-IP system interface. The default value is 1500.
Layer 2 forwarding is the means by which frames are exchanged directly between hosts, with no IP routing required. This is accomplished using a simple forwarding table for each VLAN. The L2 forwarding table is a list that shows, for each host in the VLAN, the MAC address of the host, along with the interface that the BIG-IP system needs for sending frames to that host. The intent of the L2 forwarding table is to help the BIG-IP system determine the correct interface for sending frames, when the system determines that no routing is required.
VLAN groups reside in administrative partitions. To create a VLAN group, you must first set the current partition to the partition in which you want the VLAN group to reside.
If you do not specify a VLAN group ID, the BIG-IP system assigns an ID automatically. The value of a VLAN group ID can be between 1 and 4094.
1. On the Main tab of the navigation pane, expand Network and click VLANs. This displays a list of all existing VLANs.
2. On the menu bar, from VLAN Groups, choose List. This displays a list of all existing VLAN groups.
3. In the upper-right corner, click Create. The VLAN Groups screen opens.
After you create a VLAN or a VLAN group, you must associate it with a self IP address. You associate a VLAN or VLAN group with a self IP address using the New Self IPs screens of the Configuration utility:
Associating a VLAN with a self IP address
The self IP address with which you associate a VLAN should represent an address space that includes the IP addresses of the hosts that the VLAN contains. For example, if the address of one host is 11.0.0.1 and the address of the other host is 11.0.0.2, you could associate the VLAN with a self IP address of 11.0.0.100, with a netmask of 255.255.255.0.
Associating a VLAN group with a self IP address
The self IP address with which you associate a VLAN group should represent an address space that includes the self IP addresses of the VLANs that you assigned to the group. For example, if the address of one VLAN is 10.0.0.1 and the address of the other VLAN is 10.0.0.2, you could associate the VLAN group with a self IP address of 10.0.0.100, with a netmask of 255.255.255.0.

You can assign VLANs (and VLAN groups) to route domain objects that you create. Traffic pertaining to that route domain uses those assigned VLANs.
During BIG-IP system installation, the system automatically creates a default route domain, with an ID of 0. Route domain 0 has two VLANs assigned to it, VLAN internal and VLAN external.

If you create one or more VLANs in an administrative partition other than Common, but do not create a route domain in that partition, then the VLANs you create in that partition are automatically assigned to route domain 0.

 Link aggregation is the process of combining multiple links so that the links function as a single link with higher bandwidth. Link aggregation occurs when you create a trunk. A trunk is a combination of two or more interfaces and cables configured as one link.

Subnetting Examples

netA: requires a /28 (255.255.255.240) mask to support 14 hosts
netB: requires a /27 (255.255.255.224) mask to support 28 hosts
netC: requires a /30 (255.255.255.252) mask to support 2 hosts
netD*: requires a /28 (255.255.255.240) mask to support 7 hosts
netE: requires a /27 (255.255.255.224) mask to support 28 hosts

* a /29 (255.255.255.248) would only allow 6 usable host addresses
  therefore netD requires a /28 mask.
How many subnets and hosts per subnet can you get from the network 172.27.0.0/25?
Answer: 512 subnets and 126 hosts per subnet
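The arithmetic behind that answer can be checked with Python's standard ipaddress module; this is only a verification sketch:

import ipaddress

# 172.27.0.0 is a /16 network, so a /25 mask borrows 9 bits for subnets
# (2**9 = 512 subnets) and leaves 7 host bits (2**7 - 2 = 126 usable hosts).
network = ipaddress.ip_network("172.27.0.0/16")
subnets = list(network.subnets(new_prefix=25))
hosts_per_subnet = 2 ** (32 - 25) - 2

print(len(subnets), "subnets")                       # 512
print(hosts_per_subnet, "usable hosts per subnet")   # 126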

IP Fragmentation
Internet Protocol (IP) version 4.0 Fragmentation and Reassembly
The following is an explanation of the IP Fragmentation and Reassembly process used by IP version 4.0. It will examine the purpose of IP Fragmentation, the relevant fields contained within the IP Header and the role of Maximum Transmission Unit (MTU) in determining when IP Fragmentation will be used.
As specified in RFC 791 (Internet Protocol - DARPA Internet Program Protocol Specification, Sept. 1981), the IP Fragmentation and Reassembly process occurs at the IP layer and is transparent to the Upper Layer Protocols (ULP). As a block of data is prepared for transmission, the sending or forwarding device examines the MTU for the network the data is to be sent or forwarded across. If the size of the block of data is less than the MTU for that network, the data is transmitted in accordance with the rules for that particular network. But what happens when the amount of data is greater than the MTU for the network? It is at this point that one of the functions of the IP layer, commonly referred to as Fragmentation and Reassembly, comes into play.
Maximum Transmission Unit (MTU)
There are a number of differing network transmission architectures, each having a physical limit on the number of data bytes that may be contained within a given frame. This physical limit is described in numerous specifications and is referred to as the Maximum Transmission Unit, or MTU, of the network. An example of such an MTU would be IEEE 802.3 Ethernet; according to the specifications, the maximum number of data bytes that can be contained within a frame is 1500. The following table lists the MTU of a common network type (from RFC 1191 - MTU Path Discovery, Nov 1990):
Network Architecture - MTU in Bytes
802.3 Ethernet - 1500
Sequence examples of IP Fragmentation and IP Fragmentation Reassembly
IP Fragmentation
Regardless of what situation occurs that requires IP Fragmentation, the procedure followed by the device performing the fragmentation must be as follows (a sketch of the arithmetic appears after these steps):
1.     The device attempting to transmit the block of data will first examine the Flag field to see if the field is set to the value of (x0x or x1x). If the value is equal to (x1x) this indicates that the data may not be fragmented, forcing the transmitting device to discard that data. Depending on the specific configuration of the device, an Internet Control Message Protocol (ICMP) Destination Unreachable -> Fragmentation required and Do Not Fragment Bit Set message may be generated.
2.     Assuming the flag field is set to (x0x), the device computes the number of fragments required to transmit the data by dividing the amount of data by the MTU. This will result in "X" number of frames, with all but the final frame being equal to the MTU for that network.
3.     It will then create the required number of IP packets and copy the IP header into each of these packets, so that each packet has the same identifying information, including the Identification field.
4.     The Flag field in the first packet, and in all subsequent packets except the final packet, will be set to "More Fragments." The final packet's Flag field will instead be set to "Last Fragment."
5.     The Fragment Offset will be set for each packet to record the relative position of the data contained within that packet.
6.     The packets will then be transmitted according to the rules for that network architecture.
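The following hedged Python sketch reproduces the fragmentation arithmetic from the steps above. Fragment offsets are carried in the IP header in 8-byte units, so each fragment's payload (except the last) must be a multiple of 8 bytes that fits the MTU; the header size and payload length are illustrative:

# Compute fragment lengths, offsets (in 8-byte units), and More Fragments flags.
def fragment(payload_len, mtu, ip_header=20):
    max_payload = (mtu - ip_header) // 8 * 8   # largest multiple of 8 that fits
    fragments = []
    offset = 0
    while offset < payload_len:
        size = min(max_payload, payload_len - offset)
        more_fragments = (offset + size) < payload_len
        fragments.append({"offset_units": offset // 8,
                          "length": size,
                          "more_fragments": more_fragments})
        offset += size
    return fragments

for frag in fragment(payload_len=4000, mtu=1500):
    print(frag)   # three fragments: 1480, 1480 and 1040 bytes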
IP Fragment Reassembly
If a receiving device detects that IP Fragmentation has been employed, the procedure followed by the device performing the Reassembly must be as follows:
1.     The device receiving the data detects the Flag Field set to "More Fragments."
2.     It will then examine all incoming packets for the same Identification number contained in the packet.
3.     It will store all of these identified fragments in a buffer in the sequence specified by the Fragment Offset Field.
4.     Once the final fragment arrives, as indicated by its Flag field being set to "Last Fragment," the device will attempt to reassemble the data in offset order.
5.     If reassembly is successful, the packet is then sent to the ULP in accordance with the rules for that device.
6.     If reassembly is unsuccessful, perhaps due to one or more lost fragments, the device will eventually time out and all of the fragments will be discarded.
7.     The transmitting device will then have to attempt to retransmit the data in accordance with its own procedures.
Security and IP Fragments
The IP version 4 Fragmentation and Reassembly process suffers from a particular weakness that can be utilized to trigger a Denial of Service Attack (DOS). The receiving device will attempt reassembly following receipt of a frame containing a Flag field set to (xx1), indicating more fragments to follow. Recall that receipt of such a frame causes the receiving device to allocate buffer resources for reassembly.
So what happens if a device is flooded with separate frames, each with the Flag field set to (xx1), but each has the Identification Field set to a different value? According to the rules for IP version 4 Fragmentation and Reassembly, the device would attempt to allocate resources to each separate fragment in preparation for reassembly. However, given a flood of such fragments, the receiving device would quickly exhaust its available resources while waiting for buffer time-outs to occur. The result, of course, would be that possible valid fragments would be lost or encounter insufficient resources to support reassembly. The common term for this type of artificially induced shortage of resources is "Denial of Service Attack".
To defend against just such DOS attempts, many network security features now include specific rules implemented at the Firewall that change the time-out value for how long they will hold incoming fragments before discarding them.

MAXIMUM SEGMENT SIZE (MSS)

The Maximum Segment Size is used to define the maximum segment that will be used during a connection between two hosts. As such, you should only see this option used during the SYN and SYN/ACK phase of the 3-way-handshake. The MSS TCP Option occupies 4 bytes (32 bits) of length.
If you have previously come across the term "MTU", which stands for Maximum Transmission Unit, you will be pleased to know that the MSS is closely related to the MTU used on the network.
If you're scratching your head because the relationship between MSS and MTU is not quite clear, don't worry: on a typical Ethernet link the MSS is simply the MTU minus the 20-byte IP header and the 20-byte TCP header, as the sketch below shows.
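A tiny sketch of that arithmetic, assuming a standard Ethernet MTU and 20-byte IP and TCP headers with no options:

# MSS advertised in the SYN is typically the MTU minus the IP and TCP headers.
MTU = 1500
IP_HEADER = 20
TCP_HEADER = 20
MSS = MTU - IP_HEADER - TCP_HEADER
print(MSS)  # 1460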

WINDOW SCALING

We briefly mentioned Window Scaling in the previous section of the TCP analysis, though you will soon discover that this topic is quite broad and requires a great deal of attention.
After gaining a sound understanding of what the Window size field is used for, Window Scaling is, in essence, an extension to the Window size field. Because the largest possible value in the Window size field is only 65,535 bytes (64 KB), it was clear that a larger field was required in order to increase the value to a whopping 1 GB! Thus, Window Scaling was born.
The scaled window can be a maximum of 30 bits in size, which includes the original 16-bit Window size field covered in the previous section plus a shift count of up to 14 bits carried in the Window Scale option. So that's 16 (original window field) + 14 (TCP Option 'Window Scaling') = 30 bits in total.
If you're wondering where on earth anyone would use such an extremely large window size, Window Scaling was created for high-latency, high-bandwidth WAN links, where a limited window size can cause severe performance problems (see the sketch below).
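The scaling arithmetic, and the bandwidth-delay product that motivates it, can be sketched in a few lines of Python; the link speed, round-trip time, and scale factor are illustrative:

# The advertised 16-bit window is shifted left by the negotiated scale factor.
advertised_window = 65535   # 16-bit field in the TCP header
scale_factor = 7            # negotiated via the Window Scale option (0-14)

effective_window = advertised_window << scale_factor
print(effective_window)     # 8388480 bytes (about 8 MB)

# Why it matters: on a 100 Mbit/s link with a 100 ms round-trip time, the
# amount of data that must be "in flight" already exceeds the unscaled 64 KB.
bandwidth_bps = 100_000_000
rtt_seconds = 0.1
print(bandwidth_bps / 8 * rtt_seconds)   # 1,250,000 bytes needed in flight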

SELECTIVE ACKNOWLEDGMENTS (SACK)

TCP has been designed to be a fairly robust protocol but, despite this, it still has several disadvantages, one of which concerns acknowledgements; this also happens to be the reason Selective Acknowledgements were introduced with RFC 1072.
The problem with the good old plain acknowledgements is that there is no mechanism for a receiver to state "I'm still waiting for bytes 20 through 25, but have received bytes 30 through 35". And if you're wondering whether this is possible, the answer is 'yes', it is!

TIMESTAMPS

Another aspect of TCP's flow-control and reliability services is measuring the round-trip delivery time that a virtual circuit is experiencing. The round-trip delivery time lets TCP accurately determine how long to wait before attempting to retransmit a segment that has not been acknowledged.

NOP

The nop TCP Option means "No Operation" and is used to separate the different options used within the TCP Option field. The implementation of the nop field depends on the operating system used. For example, if the MSS and SACK options are used, Windows XP will usually place two nops between them.

FLOW CONTROL

Flow control is used to control the data flow between the two ends of a connection. If for any reason one of the two hosts is unable to keep up with the data transfer, it is able to send special signals to the other end, asking it to either stop or slow down so it can keep up.
For example, if Host B was a webserver from which people could download games, then obviously Host A is not going to be the only computer downloading from this webserver, so Host B must regulate the data flow to every computer downloading from it. This means it might turn to Host A and tell it to wait for a while until more resources are available because it has another 20 users trying to download at the same time.

WINDOWING

Data throughput, or transfer efficiency, would be low if the transmitting machine had to wait for an acknowledgment after sending each packet of data (the correct term is segment). Because there is time available after the sender transmits the data segment and before it finishes processing acknowledgments from the receiving machine, the sender uses that break to transmit more data. If we wanted to briefly define Windowing, we could say that it is the number of data segments the transmitting machine is allowed to send without receiving an acknowledgment for them.

Windowing controls how much information is transferred from one end to the other. While some protocols quantify information by observing the number of packets, TCP/IP measures it by counting the number of bytes.




Standard virtual server
The BIG-IP LTM TMOS operating system implements a full proxy architecture for virtual servers configured with a TCP profile. By assigning a custom TCP profile to the virtual server, you can configure the BIG-IP LTM system to maintain compatibility with disparate server operating systems in the data center. At the same time, the BIG-IP LTM system can leverage its TCP/IP stack on the client side of the connection to provide independent and optimized TCP connections to client systems.
In a full proxy architecture, the BIG-IP LTM system appears as a TCP peer to both the client and the server by associating two independent TCP connections with the end-to-end session. Although certain client information, such as the source IP address or source TCP port, may be re-used on the server side of the connection, the BIG-IP LTM system manages the two sessions independently, making itself transparent to the client and server.
The Standard virtual server requires a TCP or UDP profile, and may optionally be configured with HTTP, FTP, or SSL profiles if Layer 7 or SSL processing is required.
The TCP connection setup behavior for a Standard virtual server varies depending on whether a TCP profile or a TCP and Layer 7 profile, such as HTTP, is associated with the virtual server.
Standard virtual server with a TCP profile
The TCP connection setup behavior for a Standard virtual server operates as follows: the three-way TCP handshake occurs on the client side of the connection before the BIG-IP LTM system initiates the TCP handshake on the server side of the connection.
A Standard virtual server processes connections using the full proxy architecture. The following TCP flow diagram illustrates the TCP handshake for a Standard virtual server with a TCP profile:

Standard virtual server with Layer 7 functionality
If a Standard virtual server is configured with Layer 7 functionality, such as an HTTP profile, the client must send at least one data packet before the server-side connection can be initiated by the BIG-IP LTM system.
Note: The BIG-IP LTM system may initiate the server-side connection prior to the first data packet for certain Layer 7 applications, such as FTP, in which case the user waits for a greeting banner before sending any data.
The TCP connection setup behavior for a Standard virtual server with Layer 7 functionality operates as follows: the three-way TCP handshake and initial data packet are processed on the client side of the connection before the BIG-IP LTM system initiates the TCP handshake on the server side of the connection.
A Standard virtual server with Layer 7 functionality processes connections using the full proxy architecture. The following TCP flow diagram illustrates the TCP handshake for a Standard virtual server with Layer 7 functionality:

Performance Layer4 virtual server
The Performance Layer4 virtual server type uses the Fast L4 profile. Depending on the configuration, the virtual server uses the PVA ASIC chip with the PVA Acceleration mode defined as one of the following: full, assisted, or none. Irrespective of the PVA acceleration mode used in the profile, the Performance Layer4 virtual server processes connections on a packet-by-packet basis.
The Performance Layer4 virtual server packet-by-packet TCP behavior operates as follows: The initial SYN request is sent from the client to the BIG-IP LTM virtual server. The BIG-IP LTM system makes the load balancing decision and passes the SYN request to the pool member.
Performance HTTP virtual server
The Performance HTTP virtual server type uses the Fast HTTP profile. The Performance HTTP virtual server with the Fast HTTP profile is designed to speed up certain types of HTTP connections and reduce the number of connections opened to the back-end HTTP servers. This is accomplished by combining features from the TCP, HTTP, and OneConnect profiles into a single profile that is optimized for network performance. The Performance HTTP virtual server processes connections on a packet-by-packet basis and buffers only enough data to parse packet headers.
The Performance HTTP virtual server TCP behavior operates as follows: The BIG-IP system establishes server-side flows by opening TCP connections to the pool members. When a client makes a connection to the Performance HTTP virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP LTM system marks the connection as non-idle and sends a client request over the connection.
Performance HTTP virtual server with idle server-side flow  

Forwarding Layer 2 virtual server
The Forwarding Layer 2 virtual server type uses the Fast L4 profile. The Forwarding Layer 2 virtual server forwards packets based on the destination Layer 2 Media Access Control (MAC) address, and therefore does not have pool members to load balance. The virtual server shares the same IP address as a node in an associated VLAN. Before creating a Forwarding Layer 2 virtual server, you must define a VLAN group that includes the VLAN in which the node resides. The Forwarding Layer 2 virtual server processes connections on a packet-by-packet basis.
The Forwarding Layer 2 virtual server operates on a packet-by-packet basis with the following TCP behavior: the initial SYN request is sent from the client to the BIG-IP LTM virtual server. The BIG-IP LTM passes the SYN request to the node in the associated VLAN based on the destination MAC address.
TCP includes a 16-bit Checksum field in its header, which receivers use to detect segments corrupted in transit.
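As an illustration of how such a 16-bit checksum is computed, here is a hedged Python sketch of the RFC 1071 style one's-complement sum; note that real TCP also covers a pseudo-header containing the source and destination IP addresses, which this sketch omits:

import struct

# Sum the data as 16-bit words, fold any carries back in, then take the
# one's complement of the result.
def internet_checksum(data):
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
    while total >> 16:                # fold carries into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF            # one's complement of the sum

print(hex(internet_checksum(b"hello world")))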

TCP MAIN FEATURES

Here are the main features of the TCP that we are going to analyse:
·       Reliable Transport
·       Connection-Oriented
·       Flow Control
·       Windowing
·       Acknowledgements
·       More overhead

3-Way Handshake
·       STEP 1: Host A sends the initial packet to Host B. This packet has the "SYN" bit enabled. Host B receives the packet and sees the "SYN" bit which has a value of "1" (in binary, this means ON) so it knows that Host A is trying to establish a connection with it.
·       STEP 2: Assuming Host B has enough resources, it sends a packet back to Host A and with the "SYN and ACK" bits enabled (1). The SYN that Host B sends, at this step, means 'I want to synchronise with you' and the ACK means 'I acknowledge your previous SYN request'.
·       STEP 3: So... after all that, Host A sends another packet to Host B and with the "ACK" bit set (1), it effectively tells Host B 'Yes, I acknowledge your previous request'.
·       Once the 3-way handshake is complete, the connection is established (virtual circuit) and the data transfer begins; the sketch below shows the handshake being triggered from a program.
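A minimal Python sketch that triggers the handshake described above. The operating system's TCP stack performs the SYN, SYN/ACK, ACK exchange before connect() returns; the host and port are illustrative and network access is assumed:

import socket

# create_connection() returns only after the three-way handshake completes.
with socket.create_connection(("www.example.com", 80), timeout=5) as sock:
    print("Three-way handshake complete, connection established")
    print("Local endpoint: ", sock.getsockname())
    print("Remote endpoint:", sock.getpeername())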
HTTP PROTOCOL
The HTTP protocol is a request/response protocol. A client sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content over a connection with a server. The server responds with a status line, including the message's protocol version and a success or error code, followed by a MIME-like message containing server information, entity metainformation, and possible entity-body content. The relationship between HTTP and MIME is described in appendix 19.4.
Most HTTP communication is initiated by a user agent and consists of a request to be applied to a resource on some origin server. In the simplest case, this may be accomplished via a single connection (v) between the user agent (UA) and the origin server (O).
          request chain ------------------------>
       UA -------------------v------------------- O
          <----------------------- response chain
A more complicated situation occurs when one or more intermediaries are present in the request/response chain. There are three common forms of intermediary: proxy, gateway, and tunnel. A proxy is a forwarding agent, receiving requests for a URI in its absolute form, rewriting all or part of the message, and forwarding the reformatted request toward the server identified by the URI. A gateway is a receiving agent, acting as a layer above some other server(s) and, if necessary, translating the requests to the underlying server's protocol. A tunnel acts as a relay point between two connections without changing the messages; tunnels are used when the communication needs to pass through an intermediary (such as a firewall) even when the intermediary cannot understand the contents of the messages.
          request chain -------------------------------------->
       UA -----v----- A -----v----- B -----v----- C -----v----- O
          <------------------------------------- response chain
The figure above shows three intermediaries (A, B, and C) between the user agent and origin server. A request or response message that travels the whole chain will pass through four separate connections. This distinction is important because some HTTP communication options may apply only to the connection with the nearest, non-tunnel neighbor, only to the end-points of the chain, or to all connections along the chain.
Any party to the communication which is not acting as a tunnel may employ an internal cache for handling requests. The effect of a cache is that the request/response chain is shortened if one of the participants along the chain has a cached response applicable to that request. The following illustrates the resulting chain if B has a cached copy of an earlier response from O (via C) for a request which has not been cached by UA or A.
          request chain ---------->
       UA -----v----- A -----v----- B - - - - - - C - - - - - - O
          <--------- response chain
Not all responses are usefully cacheable, and some requests may contain modifiers which place special requirements on cache behavior. HTTP requirements for cache behavior and cacheable responses are defined in section 13.
HTTP communication usually takes place over TCP/IP connections. The default port is TCP 80 [19], but other ports can be used. This does not preclude HTTP from being implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used; the mapping of the HTTP/1.1 request and response structures onto the transport data units of the protocol in question is outside the scope of this specification.

List of Common HTTP Status Codes

  1. 200 OK
  2. 300 Multiple Choices
  3. 301 Moved Permanently
  4. 302 Found
  5. 304 Not Modified
  6. 307 Temporary Redirect
  7. 400 Bad Request
  8. 401 Unauthorized
  9. 403 Forbidden
  10. 404 Not Found
  11. 410 Gone
  12. 500 Internal Server Error
  13. 501 Not Implemented
  14. 503 Service Unavailable
  15. 550 Permission denied (non-standard)

HTTP Status Code - 400 Bad Request

The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.

HTTP Status Code - 401 Unauthorized

The request requires user authentication. The response MUST include a WWW-Authenticate header field containing a challenge applicable to the requested resource.

HTTP Status Code - 403 Forbidden

The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated.

HTTP Status Code - 404 Not Found

The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.

HTTP Status Code - 410 Gone

The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval.
If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 Not Found SHOULD be used instead. This response is cacheable unless indicated otherwise.

HTTP Status Code - 500 Internal Server Error

The server encountered an unexpected condition which prevented it from fulfilling the request.

HTTP Status Code - 501 Not Implemented

The server does not support the functionality required to fulfill the request. This is the appropriate response when the server does not recognize the request method and is not capable of supporting it for any resource.

HTTP Status Code - 503 Service Unavailable

The web server is unable to handle the HTTP request at the time. There are myriad reasons why this can occur, but the most common are:
  • server crash
  • server maintenance
  • server overload
  • server maliciously being attacked
  • a website has used up its allotted bandwidth
  • server may be forbidden to return the requested document
·       Proxy support and the Host field:
·       HTTP 1.1 has a required Host header by spec.
·       HTTP 1.0 does not officially require a Host header, but it doesn't hurt to add one, and many applications (proxies) expect to see the Host header regardless of the protocol version.
·       Example:
·       GET / HTTP/1.1
·       Host: www.blahblahblahblah.com
·       This header is useful because it allows you to route a message through proxy servers, and also because a web server can distinguish between different sites on the same server.
·       So this means that if you have blahblahlbah.com and helohelohelo.com both pointing to the same IP, your web server can use the Host field to distinguish which site the client machine wants.
·       Persistent connections:
·       HTTP 1.1 also allows you to have persistent connections which means that you can have more than one request/response on the same HTTP connection.
·       In HTTP 1.0 you had to open a new connection for each request/response pair, and after each response the connection would be closed. This led to some big efficiency problems because of TCP Slow Start (see the sketch after this list).
·       OPTIONS method:
·       HTTP/1.1 introduces the OPTIONS method. An HTTP client can use this method to determine the abilities of the HTTP server. It is not used very much today, since most of this information is conveyed in server responses.
·       Caching:
·       HTTP 1.0 had support for caching via the header: If-Modified-Since.
·       HTTP 1.1 expands on the caching support a lot by using something called 'entity tag'. If 2 resources are the same, then they will have the same entity tags.
·       HTTP 1.1 also adds the If-Unmodified-Since, If-Match, If-None-Match conditional headers.
·       There are also further additions relating to caching like the Cache-Control header.
·       100 Continue status:
·       There is a new return code in HTTP/1.1: 100 Continue. This is to prevent a client from sending a large request when the client is not even sure whether the server can process the request or is authorized to process it. In this case the client sends only the headers, and the server tells the client 100 Continue, go ahead with the body.
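As a small illustration of the Host header and persistent connections discussed above, here is a hedged Python sketch that reuses one HTTP/1.1 connection for two requests; the host name is illustrative and network access is assumed:

import http.client

# Two requests over a single persistent HTTP/1.1 connection, each carrying
# the mandatory Host header.
conn = http.client.HTTPConnection("www.example.com", 80, timeout=5)

conn.request("GET", "/", headers={"Host": "www.example.com"})
first = conn.getresponse()
first.read()                     # drain the body so the connection can be reused
print(first.status, first.reason)

conn.request("GET", "/index.html", headers={"Host": "www.example.com"})
second = conn.getresponse()
print(second.status, second.reason)

conn.close()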

HEAD

The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
The response to a HEAD request MAY be cacheable in the sense that the information contained in the response MAY be used to update a previously cached entity from that resource. If the new field values indicate that the cached entity differs from the current entity (as would be indicated by a change in Content-Length, Content-MD5, ETag or Last-Modified), then the cache MUST treat the cache entry as stale.

POST

The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line.
The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI. The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it, a news article is subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, together with a Location header.
Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.

PUT

The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI. If a new resource is created, the origin server MUST inform the user agent via the 201 (Created) response. If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request. If the resource could not be created or modified with the Request-URI, an appropriate error response SHOULD be given that reflects the nature of the problem. The recipient of the entity MUST NOT ignore any Content-* (e.g. Content-Range) headers that it does not understand or implement and MUST return a 501 (Not Implemented) response in such cases.
If the request passes through a cache and the Request-URI identifies one or more currently cached entities, those entries SHOULD be treated as stale. Responses to this method are not cacheable.
The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource. If the server desires that the request be applied to a different URI,
it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request.
A single resource MAY be identified by many different URIs. For example, an article might have a URI for identifying "the current version" which is separate from the URI identifying each particular version. In this case, a PUT request on a general URI might result in several other URIs being defined by the origin server.
HTTP/1.1 does not define how a PUT method affects the state of an origin server.
PUT requests MUST obey the message transmission requirements set out in section 8.2.
Unless otherwise specified for a particular entity-header, the entity-headers in the PUT request SHOULD be applied to the resource created or modified by the PUT.

DELETE

The DELETE method requests that the origin server delete the resource identified by the Request-URI. This method MAY be overridden by human intervention (or other means) on the origin server. The client cannot be guaranteed that the operation has been carried out, even if the status code returned from the origin server indicates that the action has been completed successfully. However, the server SHOULD NOT indicate success unless, at the time the response is given, it intends to delete the resource or move it to an inaccessible location.
A successful response SHOULD be 200 (OK) if the response includes an entity describing the status, 202 (Accepted) if the action has not yet been enacted, or 204 (No Content) if the action has been enacted but the response does not include an entity.

TRACE

The TRACE method is used to invoke a remote, application-layer loop-back of the request message. The final recipient of the request SHOULD reflect the message received back to the client as the entity-body of a 200 (OK) response. The final recipient is either the origin server or the first proxy or gateway to receive a Max-Forwards value of zero (0) in the request.
TRACE allows the client to see what is being received at the other end of the request chain and use that data for testing or diagnostic information. The value of the Via header field is of particular interest, since it acts as a trace of the request chain. Use of the Max-Forwards header field allows the client to limit the length of the request chain, which is useful for testing a chain of proxies forwarding messages in an infinite loop.
If the request is valid, the response SHOULD contain the entire request message in the entity-body, with a Content-Type of "message/http". Responses to this method MUST NOT be cached.

CONNECT

This specification reserves the method name CONNECT for use with a proxy that can dynamically switch to being a tunnel

HTTP 1.1

In HTTP 1.1, all connections are considered persistent unless declared otherwise. HTTP persistent connections do not use separate keepalive messages; they simply allow multiple requests to use a single connection. However, the default connection timeout of Apache 2.0 httpd[2] is as little as 15 seconds[3] and for Apache 2.2 only 5 seconds.[4] The advantage of a short timeout is the ability to deliver multiple components of a web page quickly while not consuming resources to run multiple server processes or threads for too long.[5]
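
As a small sketch of this persistent-connection behaviour (the hostname and paths are made up), Python's standard http.client module can reuse a single TCP connection for several HTTP/1.1 requests:

import http.client

# One TCP connection, reused for several HTTP/1.1 requests (keep-alive).
conn = http.client.HTTPConnection("www.example.org")

for path in ("/", "/index.html", "/about.html"):   # made-up paths
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()       # the body must be consumed before reusing the connection
    print(path, resp.status)

conn.close()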

What is an HTTP Header?

HTTP headers are exchanged in the request sent to a server and in the resulting response. When you enter an address into your browser, it sends a request to the server hosting the domain and the server responds. You can inspect the request and response headers with a header-viewing tool or a command-line client. A HEAD request asks the server to send only header information, while a GET request returns both headers and file content, just like a browser request (a short sketch of fetching headers follows the list below). Information in response headers may include
·       Response status; 200 is a valid response from the server.
·       Date of request.
·       Server details; type, configuration and version numbers. For example, the PHP version.
·       Cookies; cookies set on your system for the domain.
·       Last-Modified; this is only available if set on the server and is usually the time the requested file was last modified.

·       Content-Type; text/html is an HTML web page, text/xml an XML file.
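
As a minimal sketch (the hostname is illustrative), the standard Python http.client module can issue a HEAD request and print the response headers described above:

import http.client

# HEAD asks the server for headers only; GET would also return the body.
conn = http.client.HTTPConnection("www.example.org")
conn.request("HEAD", "/")
resp = conn.getresponse()

print(resp.status, resp.reason)           # e.g. 200 OK
for name, value in resp.getheaders():     # Date, Server, Content-Type, ...
    print(name + ": " + value)

conn.close()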


DNS

The chain of events to get the IP address for www.abc.com:
First your computer queries the name server (DNS server) it is set up to use. This is the recursive name server.
The name server doesn’t know the IP address for www.abc.com, so it will start the following chain of queries before it can report back the IP address to your computer.
1.    Query the Internet root servers to get the name servers for the .com TLD.
2.    Query the .com TLD name servers to get the authoritative name servers for abc.com.
3.    Query the authoritative name servers for abc.com to finally get the IP address for the host www.abc.com, then return that IP address to your computer.
4.    Done! Now that your computer has the IP address for www.abc.com, it can access that host.
When a DNS client needs to look up a name used in a program, it queries DNS servers to resolve the name. Each query message the client sends contains three pieces of information, specifying a question for the server to answer:
  • A specified DNS domain name, stated as a fully qualified domain name (FQDN)
  • A specified query type, which can either specify a resource record by type or a specialized type of query operation
  • A specified class for the DNS domain name; for Windows DNS servers, this should always be specified as the Internet (IN) class.
For example, the name specified could be the FQDN for a computer, such as "host-a.example.microsoft.com.", and the query type specified to look for an address (A) resource record by that name. Think of a DNS query as a client asking a server a two-part question, such as "Do you have any A resource records for a computer named 'hostname.example.microsoft.com.'?" When the client receives an answer from the server, it reads and interprets the answered A resource record, learning the IP address for the computer it asked for by name.
DNS queries resolve in a number of different ways. A client can sometimes answer a query locally using cached information obtained from a previous query. The DNS server can use its own cache of resource record information to answer a query. A DNS server can also query or contact other DNS servers on behalf of the requesting client to fully resolve the name, then send an answer back to the client. This process is known as recursion.
In addition, the client itself can attempt to contact additional DNS servers to resolve a name. When a client does so, it uses separate and additional nonrecursive queries based on referral answers from servers. This process is known as iteration.
In general, the DNS query process occurs in two parts:
  • A name query begins at a client computer and is passed to a resolver, the DNS Client service, for resolution.
  • When the query cannot be resolved locally, DNS servers can be queried as needed to resolve the name.
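
The client-side half of this process (the stub resolver handing the query to the configured DNS servers) can be pictured with Python's standard socket module; the hostname below is illustrative:

import socket

# Ask the local resolver / configured DNS servers for A records.
# getaddrinfo hides the recursion: the OS stub resolver and the recursive
# DNS server do the iterative work on our behalf.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                    proto=socket.IPPROTO_TCP):
    if family == socket.AF_INET:
        print("A record answer:", sockaddr[0])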


Iteration

Iteration is the type of name resolution used between DNS clients and servers when the following conditions are in effect:
  • The client requests the use of recursion, but recursion is disabled on the DNS server.
  • The client does not request the use of recursion when querying the DNS server.
An iterative request from a client tells the DNS server that the client expects the best answer the DNS server can provide immediately, without contacting other DNS servers.
When iteration is used, a DNS server answers a client based on its own specific knowledge about the namespace with regard to the names data being queried. For example, if a DNS server on your intranet receives a query from a local client for "www.microsoft.com", it might return an answer from its names cache. If the queried name is not currently stored in the names cache of the server, the server might respond by providing a referral -- that is, a list of NS and A resource records for other DNS servers that are closer to the name queried by the client.
When a referral is made, the DNS client assumes responsibility to continue making iterative queries to other configured DNS servers to resolve the name. For example, in the most involved case, the DNS client might expand its search as far as the root domain servers on the Internet in an effort to locate the DNS servers that are authoritative for the "com" domain. Once it contacts the Internet root servers, it can be given further iterative responses from these DNS servers that point to actual Internet DNS servers for the "microsoft.com" domain. When the client is provided records for these DNS servers, it can send another iterative query to the external Microsoft DNS servers on the Internet, which can respond with a definitive and authoritative answer.
When iteration is used, a DNS server can further assist in a name query resolution beyond giving its own best answer back to the client. For most iterative queries, a client uses its locally configured list of DNS servers to contact other name servers throughout the DNS namespace if its primary DNS server cannot resolve the query.
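
A rough sketch of a single iterative hop, assuming the third-party dnspython package is installed (pip install dnspython); the queried name follows the example above, and 198.41.0.4 is a.root-servers.net:

import dns.message
import dns.query

# Send the question directly to a root server (it will not recurse).
query = dns.message.make_query("www.microsoft.com", "A", use_edns=0)
response = dns.query.udp(query, "198.41.0.4", timeout=5)

# The root server does not answer directly; it returns a referral.
for rrset in response.authority:      # NS records for the "com" zone
    print(rrset)
for rrset in response.additional:     # glue A/AAAA records for those servers
    print(rrset)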

Caching

As DNS servers process client queries using recursion or iteration, they discover and acquire a significant store of information about the DNS namespace. This information is then cached by the server.
Caching provides a way to speed the performance of DNS resolution for subsequent queries of popular names, while substantially reducing DNS-related query traffic on the network.
As DNS servers make recursive queries on behalf of clients, they temporarily cache resource records (RRs). Cached RRs contain information obtained from DNS servers that are authoritative for DNS domain names learned while making iterative queries to search and fully answer a recursive query performed on behalf of a client. Later, when other clients place new queries that request RR information matching cached RRs, the DNS server can use the cached RR information to answer them.
When information is cached, a Time-To-Live (TTL) value applies to all cached RRs. As long as the TTL for a cached RR does not expire, a DNS server can continue to cache and use the RR again when answering queries by its clients that match these RRs. Caching TTL values used by RRs in most zone configurations are assigned the minimum (default) TTL set in the zone's start of authority (SOA) resource record. By default, the minimum TTL is 3,600 seconds (1 hour) but can be adjusted or, if needed, individual caching TTLs can be set at each RR.
Notes
  • You can install a DNS server as a caching-only server. For more information, see Using caching-only servers.
  • By default, DNS servers use a root hints file, Cache.dns, that is stored in the systemroot\System32\Dns folder on the server computer. The contents of this file are preloaded into server memory when the service is started and contain pointer information to root servers for the DNS namespace where you are operating DNS servers. For more information about this file or how it is used, see DNS-related files.
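
The caching behaviour described above can be pictured with a minimal, simplified sketch of a TTL-based record cache (illustrative Python, not DNS server code; the record data is made up):

import time

class RRCache:
    """Minimal sketch of TTL-based caching of resource records."""

    def __init__(self):
        self._store = {}                     # (name, rtype) -> (expiry, data)

    def put(self, name, rtype, data, ttl):
        self._store[(name, rtype)] = (time.time() + ttl, data)

    def get(self, name, rtype):
        entry = self._store.get((name, rtype))
        if entry is None:
            return None                      # not cached: resolve upstream
        expiry, data = entry
        if time.time() > expiry:             # TTL expired: discard the entry
            del self._store[(name, rtype)]
            return None
        return data

cache = RRCache()
cache.put("host-a.example.microsoft.com.", "A", "10.0.0.5", ttl=3600)
print(cache.get("host-a.example.microsoft.com.", "A"))   # hit until TTL expires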

DNS Query Process
·       Step 1: Request information
·       The process begins when you ask your computer to resolve a hostname, such as visiting http://dyn.com. The first place your computer looks is its local DNS cache, which stores information that your computer has recently retrieved.
·       If your computer doesn’t already know the answer, it needs to perform a DNS query to find out.
·       Step 2: Ask the recursive DNS servers
·       If the information is not stored locally, your computer queries (contacts) your ISP’s recursive DNS servers. These specialized computers perform the legwork of a DNS query on your behalf. Recursive servers have their own caches, so the process usually ends here and the information is returned to the user.
·       Step 3: Ask the root nameservers
·       If the recursive servers don’t have the answer, they query the root nameservers. A nameserver is a computer that answers questions about domain names, such as IP addresses. The thirteen root nameservers act as a kind of telephone switchboard for DNS. They don’t know the answer, but they can direct our query to someone that knows where to find it.
·       Step 4: Ask the TLD nameservers
·       The root nameservers will look at the first part of our request, reading from right to left — www.dyn.com — and direct our query to the Top-Level Domain (TLD) nameservers for .com. Each TLD, such as .com, .org, and .us, has its own set of nameservers, which act like a receptionist for each TLD. These servers don’t have the information we need, but they can refer us directly to the servers that do have the information.
·       Step 5: Ask the authoritative DNS servers
·       The TLD nameservers review the next part of our request — www.dyn.com — and direct our query to the nameservers responsible for this specific domain. These authoritative nameservers are responsible for knowing all the information about a specific domain, which is stored in DNS records. There are many types of records, which each contain a different kind of information. In this example, we want to know the IP address for www.dyn.com, so we ask the authoritative nameserver for the Address Record (A).
·       Step 6: Retrieve the record
·       The recursive server retrieves the A record for dyn.com from the authoritative nameservers and stores the record in its local cache. If anyone else requests the host record for dyn.com, the recursive servers will already have the answer and will not need to go through the lookup process again. All records have a time-to-live value, which is like an expiration date. After a while, the recursive server will need to ask for a new copy of the record to make sure the information doesn’t become out-of-date.
·       Step 7: Receive the answer
·       Armed with the answer, the recursive server returns the A record back to your computer. Your computer stores the record in its cache, reads the IP address from the record, then passes this information to your browser. The browser then opens a connection to the webserver and receives the website.
·       This entire process, from start to finish, takes only milliseconds to complete.




FTP


Active and Passive Connection Mode

The FTP server may support Active or Passive connections, or both.  In an Active FTP connection, the client opens a port and listens and the server actively connects to it.  In a Passive FTP connection, the server opens a port and listens (passively) and the client connects to it.  Your FTP client must be granted access to the Internet, and you must choose the right type of FTP connection mode.

Most FTP client programs select passive connection mode by default because server administrators prefer it as a safety measure.  Firewalls generally block connections that are "initiated" from the outside.  Using passive mode, the FTP client is "reaching out" to the server to make the connection.  The firewall will allow these outgoing connections, meaning that no special adjustments to firewall settings are required.

If you are connecting to the FTP server using Active mode of connection you must set your firewall to accept connections to the port that your FTP client will open.  However, many Internet service providers block incoming connections to all ports above 1024.  Active FTP servers generally use port 20 as their data port.

It's a good idea to use Passive mode to connect to an FTP server.  Most FTP servers support Passive mode.  For a Passive FTP connection to succeed, the FTP server administrator must configure the firewall to accept connections to any ports that the FTP server may open.  However, this is the server administrator's responsibility (and standard practice for servers), so you can go ahead and make FTP connections.



The FTP uses mainly 2 file transfer modes 

1.     Binary - Binary mode transmits all eight bits per byte, giving a higher transfer rate and reducing the chance of transmission errors.
2.     ASCII - This is the default transfer mode and transmits 7 bits per byte.
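
As a hedged sketch of the two transfer modes (the server, credentials, and file names below are made up), Python's standard ftplib module exposes them as line-oriented and binary retrieval calls:

from ftplib import FTP

# Hypothetical server and credentials, for illustration only.
ftp = FTP("ftp.example.com")
ftp.login("anonymous", "guest@example.com")

# ASCII mode: line-oriented text transfer.
ftp.retrlines("RETR readme.txt", print)

# Binary (image) mode: raw 8-bit transfer.
with open("archive.zip", "wb") as f:
    ftp.retrbinary("RETR archive.zip", f.write)

ftp.quit()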



Active FTP vs. Passive FTP, a Definitive Explanation


Introduction

One of the most commonly seen questions when dealing with firewalls and other Internet connectivity issues is the difference between active and passive FTP and how best to support either or both of them. Hopefully the following text will help to clear up some of the confusion over how to support FTP in a firewalled environment.
This may not be the definitive explanation, as the title claims, however, I've heard enough good feedback and seen this document linked in enough places to know that quite a few people have found it to be useful. I am always looking for ways to improve things though, and if you find something that is not quite clear or needs more explanation, please let me know! Recent additions to this document include the examples of both active and passive command line FTP sessions. These session examples should help make things a bit clearer. They also provide a nice picture into what goes on behind the scenes during an FTP session. Now, on to the information...

The Basics

FTP is a TCP based service exclusively. There is no UDP component to FTP. FTP is an unusual service in that it utilizes two ports, a 'data' port and a 'command' port (also known as the control port). Traditionally these are port 21 for the command port and port 20 for the data port. The confusion begins however, when we find that depending on the mode, the data port is not always on port 20.

Active FTP

In active mode FTP the client connects from a random unprivileged port (N > 1023) to the FTP server's command port, port 21. Then, the client starts listening to port N+1 and sends the FTP command PORT N+1 to the FTP server. The server will then connect back to the client's specified data port from its local data port, which is port 20.
From the server-side firewall's standpoint, to support active mode FTP the following communication channels need to be opened:
  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's port 20 to ports > 1023 (Server initiates data connection to client's data port)
  • FTP server's port 20 from ports > 1023 (Client sends ACKs to server's data port)
Step by step, the active mode connection proceeds as follows:

In step 1, the client's command port contacts the server's command port and sends the command PORT 1027. The server then sends an ACK back to the client's command port in step 2. In step 3 the server initiates a connection on its local data port to the data port the client specified earlier. Finally, the client sends an ACK back as shown in step 4.
The main problem with active mode FTP actually falls on the client side. The FTP client doesn't make the actual connection to the data port of the server--it simply tells the server what port it is listening on and the server connects back to the specified port on the client. From the client side firewall this appears to be an outside system initiating a connection to an internal client--something that is usually blocked.

Active FTP Example

Below is an actual example of an active FTP session. The only things that have been changed are the server names, IP addresses, and user names. In this example an FTP session is initiated from testbox1.slacksite.com (192.168.150.80), a Linux box running the standard FTP command line client, to testbox2.slacksite.com (192.168.150.90), a Linux box running ProFTPd 1.2.2RC2. The debugging (-d) flag is used with the FTP client to show what is going on behind the scenes. The lines beginning with ---> are the debugging output, which shows the actual FTP commands being sent to the server; the numbered lines are the responses generated from those commands.
There are a few interesting things to consider about this dialog. Notice that when the PORT command is issued, it specifies a port on the client (192.168.150.80) system, rather than the server. We will see the opposite behavior when we use passive FTP. While we are on the subject, a quick note about the format of the PORT command. As you can see in the example below it is formatted as a series of six numbers separated by commas. The first four octets are the IP address while the last two octets comprise the port that will be used for the data connection. To find the actual port multiply the fifth octet by 256 and then add the sixth octet to the total. Thus in the example below the port number is ( (14*256) + 178), or 3762. A quick check with netstat should confirm this information.
testbox1: {/home/p-t/slacker/public_html} % ftp -d testbox2
Connected to testbox2.slacksite.com.
220 testbox2.slacksite.com FTP server ready.
Name (testbox2:slacker): slacker
---> USER slacker
331 Password required for slacker.
Password: TmpPass
---> PASS XXXX
230 User slacker logged in.
---> SYST
215 UNIX Type: L8
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
ftp: setsockopt (ignored): Permission denied
---> PORT 192,168,150,80,14,178
200 PORT command successful.
---> LIST
150 Opening ASCII mode data connection for file list.
drwx------   3 slacker    users         104 Jul 27 01:45 public_html
226 Transfer complete.
ftp> quit
---> QUIT
221 Goodbye.
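
A small worked check of the PORT arithmetic from the session above (the values come from the PORT 192,168,150,80,14,178 line):

# The PORT argument encodes the client IP and the data port:
# port = (fifth value * 256) + sixth value = (14 * 256) + 178 = 3762.
octets = "192,168,150,80,14,178".split(",")
ip = ".".join(octets[:4])
port = int(octets[4]) * 256 + int(octets[5])
print(ip, port)          # 192.168.150.80 3762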

Passive FTP

In order to resolve the issue of the server initiating the connection to the client a different method for FTP connections was developed. This was known as passive mode, or PASV, after the command used by the client to tell the server it is in passive mode.
In passive mode FTP the client initiates both connections to the server, solving the problem of firewalls filtering the incoming data port connection to the client from the server. When opening an FTP connection, the client opens two random unprivileged ports locally (N > 1023 and N+1). The first port contacts the server on port 21, but instead of then issuing a PORT command and allowing the server to connect back to its data port, the client will issue the PASV command. The result of this is that the server then opens a random unprivileged port (P > 1023) and sends P back to the client in response to the PASV command. The client then initiates the connection from port N+1 to port P on the server to transfer data.
From the server-side firewall's standpoint, to support passive mode FTP the following communication channels need to be opened:
  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's ports > 1023 from anywhere (Client initiates data connection to random port specified by server)
  • FTP server's ports > 1023 to remote ports > 1023 (Server sends ACKs (and data) to client's data port)
Step by step, a passive mode FTP connection proceeds as follows:

In step 1, the client contacts the server on the command port and issues the PASV command. The server then replies in step 2 with PORT 2024, telling the client which port it is listening to for the data connection. In step 3 the client then initiates the data connection from its data port to the specified server data port. Finally, the server sends back an ACK in step 4 to the client's data port.
One issue with passive mode is that the server must accept incoming data connections on a range of high-numbered ports, as shown in the firewall rules above. A second issue involves supporting and troubleshooting clients which do (or do not) support passive mode. As an example, the command line FTP utility provided with Solaris did not originally support passive mode, necessitating a third-party FTP client, such as ncftp. (NOTE: This is no longer the case -- use the -p option with the Solaris FTP client to enable passive mode.)
With the massive popularity of the World Wide Web, many people prefer to use their web browser as an FTP client. Most browsers only support passive mode when accessing ftp:// URLs.
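
As a hedged sketch of driving a passive-mode session from code (the host and credentials are taken from the made-up example above), Python's standard ftplib module uses passive mode by default and can switch modes explicitly:

from ftplib import FTP

# Hypothetical host and credentials; ftplib defaults to passive mode.
ftp = FTP("testbox2.slacksite.com")
ftp.login("slacker", "TmpPass")

ftp.set_pasv(True)            # PASV: the client opens the data connection
ftp.retrlines("LIST")         # directory listing over the data channel

# ftp.set_pasv(False) would switch to active mode (PORT), which requires the
# server to connect back to the client -- often blocked by client firewalls.
ftp.quit()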


PERSISTENCE


Session cookie

A user's session cookie[15] (also known as an in-memory cookie or transient cookie) for a website exists in temporary memory only while the user is reading and navigating the website. When an expiry date or validity interval is not set at cookie creation time, a session cookie is created. Web browsers normally delete session cookies when the user closes the browser.[16][17]

Persistent cookie

A persistent cookie[15] will outlast user sessions. If a persistent cookie has its Max-Age set to 1 year (for example), then, during that year, the initial value set in that cookie would be sent back to the server every time the user visited the server. This could be used to record a vital piece of information such as how the user initially came to this website. For this reason, persistent cookies are also called tracking cookies.

Secure cookie

A secure cookie has the secure attribute enabled and is only used via HTTPS, ensuring that the cookie is always encrypted when transmitting from client to server. This makes the cookie less likely to be exposed to cookie theft via eavesdropping. In addition to that, all cookies are subject to browser's same-origin policy.[18]

HttpOnly cookie

The HttpOnly attribute is supported by most modern browsers.[19][20] On a supported browser, an HttpOnly session cookie will be used only when transmitting HTTP (or HTTPS) requests, thus restricting access from other, non-HTTP APIs (such as JavaScript). This restriction mitigates but does not eliminate the threat of session cookie theft via cross-site scripting (XSS).[21] This feature applies only to session-management cookies, and not other browser cookies.

Third-party cookie

First-party cookies are cookies that belong to the same domain that is shown in the browser's address bar (or that belong to the sub domain of the domain in the address bar). Third-party cookies are cookies that belong to domains different from the one shown in the address bar. Web pages can feature content from third-party domains (such as banner adverts), which opens up the potential for tracking the user's browsing history. Privacy setting options in most modern browsers allow the blocking of third-party tracking cookies.
As an example, suppose a user visits www.example1.com. This web site contains an advert from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advert's domain (ad.foxytracking.com). Then, the user visits another website, www.example2.com, which also contains an advert from ad.foxytracking.com, and which also sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their ads or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser.
As of 2014, some websites were setting cookies readable for over 100 third-party domains.[22] On average, a single website was setting 10 cookies, with maximum number of cookies (first- and third-party) reaching over 800.[23]

Supercookie

A "supercookie" is a cookie with an origin of a Top-Level Domain (such as .com) or a Public Suffix (such as .co.uk). It is important that supercookies are blocked by browsers, due to the security holes they introduce. If unblocked, an attacker in control of a malicious website could set a supercookie and potentially disrupt or impersonate legitimate user requests to another website that shares the same Top-Level Domain or Public Suffix as the malicious website. For example, a supercookie with an origin of .com, could maliciously affect a request made to example.com, even if the cookie did not originate from example.com. This can be used to fake logins or change user information.
The Public Suffix List is a cross-vendor initiative to provide an accurate and up-to-date list of domain name suffixes. Older versions of browsers may not have the most up-to-date list, and will therefore be vulnerable to supercookies from certain domains.

Supercookie (other uses)

The term "supercookie" is sometimes used for tracking technologies that do not rely on HTTP cookies. Two such "supercookie" mechanisms were found on Microsoft websites: cookie syncing that respawned MUID (Machine Unique IDentifier) cookies, and ETag cookies.[24] Due to media attention, Microsoft later disabled this code:[25]
In response to recent attention on "supercookies" in the media, we wanted to share more detail on the immediate action we took to address this issue, as well as affirm our commitment to the privacy of our customers. According to researchers, including Jonathan Mayer at Stanford University, "supercookies" are capable of re-creating users' cookies or other identifiers after people deleted regular cookies. Mr. Mayer identified Microsoft as one among others that had this code, and when he brought his findings to our attention we promptly investigated. We determined that the cookie behavior he observed was occurring under certain circumstances as a result of older code that was used only on our own sites, and was already scheduled to be discontinued. We accelerated this process and quickly disabled this code. At no time did this functionality cause Microsoft cookie identifiers or data associated with those identifiers to be shared outside of Microsoft.

Setting a cookie

Transfer of Web pages follows the HyperText Transfer Protocol (HTTP). Regardless of cookies, browsers request a page from web servers by sending them a usually short text called HTTP request. For example, to access the page http://www.example.org/index.html, browsers connect to the server www.example.org sending it a request that looks like the following one:
GET /index.html HTTP/1.1
Host: www.example.org
The server replies by sending the requested page preceded by a similar packet of text, called 'HTTP response'. This packet may contain lines requesting the browser to store cookies:
HTTP/1.0 200 OK
Content-type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
(content of page)
The server sends lines of Set-Cookie only if the server wishes the browser to store cookies. Set-Cookie is a directive for the browser to store the cookie and send it back in future requests to the server (subject to expiration time or other cookie attributes), if the browser supports cookies and cookies are enabled. For example, the browser requests the page http://www.example.org/spec.html by sending the server www.example.org a request like the following:
GET /spec.html HTTP/1.1
Host: www.example.org
Cookie: theme=light; sessionToken=abc123
This is a request for another page from the same server, and differs from the first one above because it contains the string that the server has previously sent to the browser. This way, the server knows that this request is related to the previous one. The server answers by sending the requested page, possibly adding other cookies as well.
The value of a cookie can be modified by the server by sending a new Set-Cookie: name=newvalue line in response of a page request. The browser then replaces the old value with the new one.
The value of a cookie may consist of any printable ASCII character (! through ~, Unicode \u0021 through \u007E) excluding commas, semicolons, and whitespace. The name of the cookie also excludes = because that is the delimiter between the name and value. The cookie standard RFC 2965 is more restrictive but is not implemented by browsers.
Some of the operations that can be done using cookies can also be done using other mechanisms.

IP address

Some users may be tracked based on the IP address of the computer requesting the page. The server knows the IP address of the computer running the browser or the proxy, if any is used, and could theoretically link a user's session to this IP address.
IP addresses are, generally, not a reliable way to track a session or identify a user. Many computers designed to be used by a single user, such as office PCs or home PCs, are behind a network address translator (NAT). This means that several PCs will share a public IP address. Furthermore, some systems, such as Tor, are designed to retain Internet anonymity, rendering tracking by IP address impractical, impossible, or a security risk.

Setting Cookies

Servers supply cookies by populating the set-cookie response header with the following details:
·       Name: the name of the cookie.
·       Value: the textual value to be held by the cookie.
·       Expires: the date/time when the cookie should be discarded by the browser. If this field is empty, the cookie expires at the end of the current browser session. This field can also be used to delete a cookie by setting a date/time in the past.
·       Path: the path below which the cookie should be supplied by the browser.
·       Domain: the web site domain to which this cookie applies. This defaults to the current domain; attempts to set cookies on other domains are subject to the privacy controls built into the browser.
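
As an illustrative sketch of building such a Set-Cookie header on the server side (the cookie name, value, and domain are made up), Python's standard http.cookies module can be used:

from http.cookies import SimpleCookie

# Build the Set-Cookie header a server might emit (values are made up).
cookie = SimpleCookie()
cookie["theme"] = "light"
cookie["theme"]["path"] = "/"
cookie["theme"]["domain"] = "www.example.org"
cookie["theme"]["expires"] = "Wed, 09 Jun 2021 10:18:14 GMT"

# output() renders a "Set-Cookie: name=value; attributes" line.
print(cookie.output())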
Cookies are usually small text files, given ID tags, that are stored in your computer's browser directory or program data subfolders. Cookies are created when you use your browser to visit a website that uses cookies to keep track of your movements within the site.
There are two types of cookies: session cookies and persistent cookies. Session cookies are created temporarily in your browser's subfolder while you are visiting a website. Once you leave the site, the session cookie is deleted.
Persistent cookie files remain in your browser's subfolder and are activated again once you visit the website that created that particular cookie. A persistent cookie remains in the browser's subfolder for the duration period set within the cookie's file.

·       Cookie persistence: Cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited at a web site.
·       Destination address affinity persistence: Also known as sticky persistence, destination address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the destination IP address of a packet.
·       Hash persistence: Hash persistence allows you to create a persistence hash based on an existing iRule.
·       Microsoft Remote Desktop Protocol persistence: MSRDP persistence tracks sessions between clients and servers running the Microsoft Remote Desktop Protocol (RDP) service.
·       SIP persistence: SIP persistence is a type of persistence used for servers that receive Session Initiation Protocol (SIP) messages sent through UDP, SCTP, or TCP.
·       Source address affinity persistence: Also known as simple persistence, source address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the source IP address of a packet.
·       SSL persistence: SSL persistence is a type of persistence that tracks non-terminated SSL sessions, using the SSL session ID; terminated SSL sessions require a different persistence method.
·       Universal persistence: Universal persistence allows you to write an expression that defines what to persist on in a packet. The expression, written using the same expression syntax that you use in iRules, defines some sequence of bytes to use as a session identifier.

You can set up the BIG-IP system to use HTTP cookie persistence. Cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same pool member previously visited at a web site.
Understanding Cookie profile settings
To implement cookie persistence, you can either use the default cookie profile, or create a custom profile. The settings that make up a Cookie profile include the following:
·       The cookie name, which by default is autogenerated based on the pool name.
·       The expiration time of the cookie. This applies to the HTTP Cookie Insert and HTTP Cookie Rewrite methods only; when using the default (checked), the system uses the expiration time specified in the session cookie.
·       A timeout, which applies to the Cookie Hash method only and specifies the duration, in seconds, of a persistence entry. For background information on setting timeout values, see Chapter 1, Introducing Local Traffic Management.
·       A mirroring option, which specifies, when enabled (checked), that if the active unit goes into standby mode, the system mirrors any persistence records to its peer. With respect to Cookie profiles, this setting applies to the Cookie Hash method only.



LTM



§  Random: This load balancing method randomly distributes load across the servers available, picking one via random number generation and sending the current connection to it. While it is available on many load balancing products, its usefulness is questionable except where uptime is concerned – and then only if you detect down machines.
§  Round Robin: Round Robin passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin works well in most configurations, but could be better if the equipment that you are load balancing is not roughly equal in processing speed, connection speed, and/or memory.
§  Weighted Round Robin (called Ratio on the BIG-IP): With this method, the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine. This is an improvement over Round Robin because you can say “Machine 3 can handle 2x the load of machines 1 and 2”, and the load balancer will send two requests to machine #3 for each request to the others. (A toy sketch of this ratio-based selection appears after this list of methods.)
§  Dynamic Round Robin (called Dynamic Ratio on the BIG-IP): This method is similar to Weighted Round Robin; however, the weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. This Application Delivery Controller method is rarely available in a simple load balancer.
§  Fastest: The Fastest method passes a new connection based on the fastest response time of all servers. This method may be particularly useful in environments where servers are distributed across different logical networks. On the BIG-IP, only servers that are active will be selected.
§  Least Connections: With this method, the system passes a new connection to the server that has the least number of current connections. Least Connections methods work best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. This Application Delivery Controller method is rarely available in a simple load balancer.
§  Observed: The Observed method uses a combination of the logic used in the Least Connections and Fastest algorithms to load balance connections to servers being load-balanced. With this method, servers are ranked based on a combination of the number of current connections and the response time. Servers that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. This Application Delivery Controller method is rarely available in a simple load balancer.
§  Predictive: The Predictive method uses the ranking method used by the Observed method; however, with the Predictive method, the system analyzes the trend of the ranking over time, determining whether a server's performance is currently improving or declining. The servers in the specified pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. The Predictive methods work well in any environment. This Application Delivery Controller method is rarely available in a simple load balancer.
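
The following toy sketch (plain Python, not BIG-IP code; the server names and weights are made up) illustrates the ratio-based selection referenced above:

import itertools

# Toy weighted round robin ("Ratio") sketch. Machine 3 is given twice the
# weight of machines 1 and 2, so it receives two connections for each one
# that the others receive.
servers = {"server1": 1, "server2": 1, "server3": 2}

# Expand each server into the rotation according to its ratio weight.
rotation = itertools.cycle(
    [name for name, weight in servers.items() for _ in range(weight)]
)

for _ in range(8):                      # eight incoming connections
    print("send connection to", next(rotation))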
§  Maintaining the L2 forwarding table
§  Layer 2 forwarding is the means by which frames are exchanged directly between hosts, with no IP routing required. This is accomplished using a simple forwarding table for each VLAN. The L2 forwarding table is a list that shows, for each host in the VLAN, the MAC address of the host, along with the interface that the BIG-IP system needs for sending frames to that host. The intent of the L2 forwarding table is to help the BIG-IP system determine the correct interface for sending frames, when the system determines that no routing is required.
§  The format of an entry in the L2 forwarding table is:
§  <MAC address> -> <interface>
§  For example, an entry for a host in the VLAN might look like this:
§  00:a0:c9:9e:1e:2f -> 2.1
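
A simplified sketch of such a forwarding table (illustrative Python, not BIG-IP code; the second MAC address is made up):

# L2 forwarding table for one VLAN: MAC address -> interface.
l2_table = {
    "00:a0:c9:9e:1e:2f": "2.1",        # entry from the example above
    "00:a0:c9:9e:1e:30": "1.2",        # made-up second host
}

def forward(dest_mac):
    interface = l2_table.get(dest_mac)
    if interface is None:
        # Typical switch behaviour for an unknown destination.
        return "flood the frame to all interfaces in the VLAN"
    return "send the frame out interface " + interface

print(forward("00:a0:c9:9e:1e:2f"))    # send the frame out interface 2.1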
§  VLAN groups reside in administrative partitions. To create a VLAN group, you must first set the current partition to the partition in which you want the VLAN group to reside.
§  If you do not specify an ID, the BIG-IP system assigns an ID automatically. The value of a VLAN group ID can be between 1 and 4094.
§  To create a VLAN group
1.    On the Main tab of the navigation pane, expand Network and click VLANs. This displays a list of all existing VLANs.
2.    On the menu bar, from VLAN Groups, choose List. This displays a list of all existing VLAN groups.
3.    In the upper-right corner, click Create. The VLAN Groups screen opens.
§  After you create a VLAN or a VLAN group, you must associate it with a self IP address. You associate a VLAN or VLAN group with a self IP address using the New Self IPs screens of the Configuration utility:
Associating a VLAN with a self IP address
The self IP address with which you associate a VLAN should represent an address space that includes the IP addresses of the hosts that the VLAN contains. For example, if the address of one host is 11.0.0.1 and the address of the other host is 11.0.0.2, you could associate the VLAN with a self IP address of 11.0.0.100, with a netmask of 255.255.255.0.
Associating a VLAN group with a self IP address
The self IP address with which you associate a VLAN group should represent an address space that includes the self IP addresses of the VLANs that you assigned to the group. For example, if the address of one VLAN is 10.0.0.1 and the address of the other VLAN is 10.0.0.2, you could associate the VLAN group with a self IP address of 10.0.0.100, with a netmask of 255.255.255.0.
§  Assigning VLANs to route domains
§  If you explicitly create route domains, you should consider the following:
You can assign VLANs (and VLAN groups) to route domain objects that you create. Traffic pertaining to that route domain uses those assigned VLANs.
During BIG-IP system installation, the system automatically creates a default route domain, with an ID of 0. Route domain 0 has two VLANs assigned to it, VLAN internal and VLAN external.
If you create one or more VLANs in an administrative partition other than Common, but do not create a route domain in that partition, then the VLANs you create in that partition are automatically assigned to route domain 0.
§   Link aggregation is the process of combining multiple links so that the links function as a single link with higher bandwidth. Link aggregation occurs when you create a trunk. A trunk is a combination of two or more interfaces and cables configured as one link.

What is a traffic group?

A traffic group is a collection of related configuration objects, such as a floating self IP address and a virtual IP address, that run on a BIG-IP® device. Together, these objects process a particular type of traffic on that device. When a BIG-IP device becomes unavailable, a traffic group floats (that is, fails over) to another device in a device group to ensure that application traffic continues to be processed with little to no interruption in service. In general, a traffic group ensures that when a device becomes unavailable, all of the failover objects in the traffic group fail over to any one of the devices in the device group, based on the number of active traffic groups on each device.
An example of a set of objects in a traffic group is an iApps™ application service. If a device with this traffic group is a member of a device group, and the device becomes unavailable, the traffic group floats to another member of the device group, and that member becomes the device that processes the application traffic.

About active-standby vs. active-active configurations

A device group that contains only one traffic group is known as an active-standby configuration.
A device group that contains two or more traffic groups is known as an active-active configuration. For example, if you configure multiple virtual IP addresses on the BIG-IP system to process traffic for different applications, you might want to create separate traffic groups that each contains a virtual IP address and its relevant floating self IP address. You can then choose to make all of the traffic groups active on one device in the device group, or you can balance the traffic group load by making some of the traffic groups active on other devices in the device group.

About active and standby failover states

During any config sync operation, each traffic group within a device group is synchronized to the other device group members. Therefore, on each device, a particular traffic group is in either an active state or a standby state. In an active state, a traffic group on a device processes application traffic. In a standby state, a traffic group on a device is idle.
For example, on Device A, traffic-group-1 might be active, and on Device B, traffic-group-1 might be standby. Similarly, on Device B, traffic-group-2 might be active while traffic-group-1 is standby.
When a device with an active traffic group becomes unavailable, the active traffic group floats to another device, choosing whichever device in the device group is most available at that moment. The term floats means that on the target device, the traffic group switches from a standby state to an active state.
The following illustration shows a typical device group configuration with two devices and one traffic group (named my_traffic_group). In this illustration, the traffic group is active on Device A and standby on Device B prior to failover.
Traffic group states before failover
If failover occurs, the traffic group becomes active on the other device. In the following illustration, Device A has become unavailable, causing the traffic group to become active on Device B and process traffic on that device.
Traffic group states after failover
When Device A comes back online, the traffic group becomes standby on that device.

Viewing the failover state of a device

You can use the BIG-IP® Configuration utility to view the current failover state of a device in a device group.
1.     Display any screen of the BIG-IP Configuration utility.
2.     In the upper left corner of the screen, view the failover state of the device. An Active failover state indicates that at least one traffic group is currently active on the device. A Standby failover state indicates that all traffic groups on the device are in a standby state.

Viewing the failover state of a traffic group

You can use the BIG-IP® Configuration utility to view the current state of all traffic groups on the device.
1.     On the Main tab, click Network > Traffic Groups.
2.     In the Failover Status area of the screen, view the state of a traffic group on the device.

Forcing a traffic group to a standby state

This task causes the selected traffic group on the local device to switch to a standby state. By forcing the traffic group into a standby state, the traffic group becomes active on another device in the device group. For device groups with more than two members, you can choose the specific device to which the traffic group fails over. This task is optional.
1.     Log in to the device on which the traffic group is currently active.
2.     On the Main tab, click Network > Traffic Groups.
3.     In the Name column, locate the name of the traffic group that you want to run on the peer device.
4.     Select the check box to the left of the traffic group name. If the check box is unavailable, the traffic group is not active on the device to which you are currently logged in. Perform this task on the device on which the traffic group is active.
5.     Click Force to Standby. This displays target device options.
6.     Choose one of these actions:
o   If the device group has two members only, click Force to Standby. This displays the list of traffic groups for the device group and causes the local device to appear in the Next Active Device column.
o   If the device group has more than two members, then from the Target Device list, select a value and click Force to Standby.
The selected traffic group is now active on another device in the device group.

About default traffic groups on the system

Each BIG-IP® device contains two default traffic groups:
·       A default traffic group named traffic-group-1 initially contains the floating self IP addresses that you configured for VLANs internal and external, as well as any iApps™ application services, virtual IP addresses, NATs, or SNAT translation addresses that you have configured on the device.
·       A default non-floating traffic group named traffic-group-local-only contains the static self IP addresses that you configured for VLANs internal and external. Because this traffic group is non-floating, it never fails over to another device.

About MAC masquerade addresses and failover

A MAC masquerade address is a unique, floating Media Access Control (MAC) address that you create and control. You can assign one MAC masquerade address to each traffic group on a BIG-IP device. By assigning a MAC masquerade address to a traffic group, you indirectly associate that address with any floating IP addresses (services) associated with that traffic group. With a MAC masquerade address per traffic group, a single VLAN can potentially carry traffic and services for multiple traffic groups, with each service having its own MAC masquerade address.
A primary purpose of a MAC masquerade address is to minimize ARP communications or dropped packets as a result of a failover event. A MAC masquerade address ensures that any traffic destined for the relevant traffic group reaches an available device after failover has occurred, because the MAC masquerade address floats to the available device along with the traffic group. Without a MAC masquerade address, on failover the sending host must relearn the MAC address for the newly-active device, either by sending an ARP request for the IP address for the traffic or by relying on the gratuitous ARP from the newly-active device to refresh its stale ARP entry.
The assignment of a MAC masquerade address to a traffic group is optional. Also, there is no requirement for a MAC masquerade address to reside in the same MAC address space as that of the BIG-IP device.

About failover objects and traffic group association

A floating traffic group contains the specific floating configuration objects that are required for processing a particular type of application traffic. The types of configuration objects that you can include in a floating traffic group are:
·       iApps™ application services
·       Virtual IP addresses
·       NATs
·       SNAT translation addresses
·       Self IP addresses
You can associate configuration objects with a traffic group in these ways:
·       You can rely on the folders in which the objects reside to inherit the traffic group that you assign to the root folder.
·       You can create an iApp application service, assigning a traffic group to the application service in that process.
·       You can use the BIG-IP® Configuration utility or tmsh to directly assign a traffic group to an object or a folder.

Viewing failover objects for a traffic group

You can use the BIG-IP® Configuration utility to view a list of all failover objects associated with a specific traffic group. For each failover object, the list shows the name of the object, the type of object, and the folder in which the object resides.
1.     On the Main tab, click Network > Traffic Groups.
2.     In the Name column, click the name of the traffic group for which you want to view the associated objects.
3.     On the menu bar, click Failover Objects. The screen displays the failover objects that are members of the selected traffic group.

About device selection for failover

When a traffic group fails over to another device in the device group, the device that the system selects is normally the device with the least number of active traffic groups. When you initially create the traffic group on a device, however, you specify the device in the group that you prefer that traffic group to run on in the event that the available devices have an equal number of active traffic groups (that is, no device has fewer active traffic groups than another). Note that, in general, the system considers the most available device in a device group to be the device that contains the fewest active traffic groups at any given time.
Within a Sync-Failover type of device group, each BIG-IP® device has a specific designation with respect to a traffic group. That is, a device in the device group can be a default device, as well as a current device or a next active device.
Table 1. Default, current, and next active devices
Default Device
A default device is a device that you specify on which a traffic group runs after failover. A traffic group fails over to the default device in these cases:
·       When you have enabled auto-failback for a traffic group.
·       When all available devices in the group are equal with respect to the number of active traffic groups. For example, suppose that during traffic group creation you designated Device B to be the default device. If failover occurs and Device B and Device C have the same number of active traffic groups, the traffic group will fail over to Device B, the default device.
The default device designation is a user-modifiable property of a traffic group. You actively specify a default device for a traffic group when you create the traffic group.
Current Device
A current device is the device on which a traffic group is currently running. For example, if Device A is currently processing traffic using the objects in Traffic-Group-1, then Device A is the current device. If Device A becomes unavailable and Traffic-Group-1 fails over to Device C (currently the device with the fewest number of active traffic groups), then Device C becomes the current device. The current device is system-selected, and might or might not be the default device.
Next Active Device
A next active device is the device currently designated to accept a traffic group if failover of a traffic group should occur. For example, if traffic-group-1 is running on Device A, and the designated device for future failover is currently Device C, then Device C is the next active device. The next active device can be either system- or user-selected, and might or might not be the default device.

About automatic failback

The failover feature includes an option known as auto-failback. When you enable auto-failback, a traffic group that has failed over to another device fails back to its default device whenever that default device is available to process the traffic. This occurs even when other devices in the group are more available than the default device to process the traffic.
If auto-failback is not enabled for a traffic group and the traffic group fails over to another device, the traffic group runs on the failover (now current) device until that device becomes unavailable. In that event, the traffic group fails over to the most available device in the group. The traffic group only fails over to its default device when the availability of the default device equals or exceeds the availability of another device in the group.

Managing automatic failback

You can use the BIG-IP® Configuration utility to manage the auto-failback option for a traffic group.
1.     On the Main tab, click Network > Traffic Groups.
2.     In the Name column, click the name of the traffic group that you want to manage.
3.     In the General Properties area of the screen, select or clear the Auto Failback check box.
o   Selecting the check box causes the traffic group to be active on its default device whenever that device is as available or more available than another device in the group.
o   Clearing the check box causes the traffic group to remain active on its current device until failover occurs again.
4.     If auto-failback is enabled, in the Auto Failback Timeout field, type the number of seconds after which auto-failback expires.
5.     Click Update.
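If you prefer the command line, the same option can typically be managed with tmsh. The following is a minimal sketch; the traffic group name traffic-group-1 and the 60-second timeout are illustrative values.

# Enable auto-failback with a 60-second timeout (illustrative values)
tmsh modify cm traffic-group traffic-group-1 auto-failback-enabled true auto-failback-time 60
# Disable auto-failback again
tmsh modify cm traffic-group traffic-group-1 auto-failback-enabled false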

Before you configure a traffic group

The following configuration restrictions apply to traffic groups:
·       On each device in a Sync-Failover device group, the BIG-IP® system automatically assigns the default floating traffic group name to the root and /Common folders. This ensures that the system fails over any traffic groups for that device to an available device in the device group.
·       The BIG-IP system creates all traffic-groups in the /Common folder, regardless of the partition to which the system is currently set.
·       Any traffic group named other than traffic-group-local-only is a floating traffic group.
·       You can set a traffic group on a folder to a floating traffic group only when the device group set on the folder is a Sync-Failover type of device-group.
·       If there is no Sync-Failover device group defined on the device, you can set a floating traffic group on a folder that inherits its device group from root or /Common.
·       Setting the traffic group on a failover object to traffic-group-local-only prevents the system from synchronizing that object to other devices in the device group.
·       You can set a floating traffic group on only those objects that reside in a folder with a device group of type Sync-Failover.
·       If no Sync-Failover device group exists, you can set floating traffic groups on objects in folders that inherit their device group from the root or /Common folders.
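As a rough illustration of these restrictions, the traffic group assigned to a failover object such as a virtual address or self IP can be changed with tmsh; the addresses and object names below are hypothetical.

# Assign a floating virtual address to a floating traffic group (illustrative values)
tmsh modify ltm virtual-address 10.10.10.100 traffic-group traffic-group-1
# Keep an HA self IP local to this device so it is never synchronized or failed over
tmsh modify net self ha_selfip traffic-group traffic-group-local-only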

Specifying IP addresses for failover

This task specifies the local IP addresses that you want other devices in the device group to use for failover communications with the local device. You must perform this task locally on each device in the device group.
Note: The failover addresses that you specify must belong to route domain 0.
1.     Confirm that you are logged in to the actual device you want to configure.
2.     On the Main tab, click Device Management > Devices. This displays a list of device objects discovered by the local device.
3.     In the Name column, click the name of the device to which you are currently logged in.
4.     From the Device Connectivity menu, choose Failover.
5.     For the Failover Unicast Configuration settings, retain the displayed IP addresses. You can also click Add to specify additional IP addresses that the system can use for failover communications. F5 Networks recommends that you use the self IP address assigned to the HA VLAN.
6.     If the BIG-IP® system is running on a VIPRION® platform, then for the Use Failover Multicast Address setting, select the Enabled check box.
7.     If you enable Use Failover Multicast Address, either accept the default Address and Port values, or specify values appropriate for the device. If you revise the default Address and Port values, but then decide to revert to the default values, click Reset Defaults.
8.     Click Update.
After you perform this task, other devices in the device group can send failover messages to the local device using the specified IP addresses.
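The unicast failover addresses can typically be viewed or configured with tmsh as well; in this sketch, the device object name bigip1.example.com, the HA self IP 10.0.1.1, and the default failover port 1026 are assumed values.

# Set a unicast failover address on the local device object (illustrative values)
tmsh modify cm device bigip1.example.com unicast-address { { ip 10.0.1.1 port 1026 } }
# Verify the configured failover addresses
tmsh list cm device bigip1.example.com unicast-address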

Creating a traffic group

If you intend to specify a MAC masquerade address when creating a traffic group, you must first create the address, using an industry-standard method for creating a locally administered MAC address.
Perform this task when you want to create a traffic group for a BIG-IP® device. You can perform this task on any BIG-IP device within the device group.
1.     On the Main tab, click Network > Traffic Groups.
2.     On the Traffic Groups list screen, click Create.
3.     In the Name field, type a name for the new traffic group.
4.     In the Description field, type a description for the new traffic group.
5.     Select a default device (a remote device) for the new traffic group.
6.     In the MAC Masquerade Address field, type a MAC masquerade address. When you specify a MAC masquerade address, you reduce the risk of dropped connections when failover occurs. This setting is optional.
7.     Select or clear the check box for the Auto Failback setting.
o   If you select the check box, it causes the traffic group to be active on its default device whenever that device is as available, or more available, than another device in the group.
o   If you clear the check box, it causes the traffic group to remain active on its current device until failover occurs again.
8.     If auto-failback is enabled, in the Auto Failback Timeout field, type the number of seconds after which auto-failback expires.
9.     Confirm that the displayed traffic group settings are correct.
10. Click Finished.
You now have a floating traffic group with a default device specified.
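For reference, a similar traffic group can typically be created from the command line. In this sketch the group name, the locally administered MAC masquerade address, and the failback timeout are all illustrative values.

# Create a traffic group with a MAC masquerade address and auto-failback (illustrative values)
tmsh create cm traffic-group traffic-group-2 mac 02:01:23:45:67:89 auto-failback-enabled true auto-failback-time 60
tmsh save sys config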

Viewing a list of traffic groups for a device

You can view a list of the traffic groups that you previously created on the device.
1.     On the Main tab, click Network > Traffic Groups.
2.     In the Name column, view the names of the traffic groups on the local device.
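The same information is typically available from tmsh, for example:

# List configured traffic groups and show their current failover state
tmsh list cm traffic-group
tmsh show cm traffic-group
tmsh show cm failover-status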

Traffic group properties

This table lists and describes the properties of a traffic group.
PROPERTY
DESCRIPTION
Name
The name of the traffic group, such as Traffic-Group-1.
Partition / Path
The name of the folder or sub-folder in which the traffic group resides.
Description
A user-defined description of the traffic group.
Default Device
The device with which a traffic group has some affinity when auto-failback is not enabled.
Current Device
The device on which a traffic group is currently running.
Next Active Device
The device currently most available to accept a traffic group if failover of that traffic group should occur.
MAC Masquerade Address
A user-created MAC address that floats on failover, to minimize ARP communications and dropped connections.
Auto Failback
The condition where the traffic group tries to fail back to the default device whenever possible.
Auto Failback Timeout
The number of seconds before auto-failback expires. This setting appears only when you enable the Auto Failback setting.
Floating
A designation that enables the traffic group to float to another device in the device group when failover occurs.

 PERSISTENCE



Session cookie

A user's session cookie[15] (also known as an in-memory cookie or transient cookie) for a website exists in temporary memory only while the user is reading and navigating the website. When an expiry date or validity interval is not set at cookie creation time, a session cookie is created. Web browsers normally delete session cookies when the user closes the browser.[16][17]

Persistent cookie

A persistent cookie[15] will outlast user sessions. If a persistent cookie has its Max-Age set to 1 year (for example), then, during that year, the initial value set in that cookie would be sent back to the server every time the user visited the server. This could be used to record a vital piece of information such as how the user initially came to this website. For this reason, persistent cookies are also called tracking cookies.
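For example, a server could set such a cookie with a header like the following; the cookie name and value are made up, and 31536000 seconds corresponds to the one-year Max-Age described above.

Set-Cookie: came_from=google_ad; Max-Age=31536000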

Secure cookie

A secure cookie has the secure attribute enabled and is only used via HTTPS, ensuring that the cookie is always encrypted when transmitted from client to server. This makes the cookie less likely to be exposed to cookie theft via eavesdropping. In addition, all cookies are subject to the browser's same-origin policy.[18]

HttpOnly cookie

The HttpOnly attribute is supported by most modern browsers.[19][20] On a supported browser, an HttpOnly session cookie will be used only when transmitting HTTP (or HTTPS) requests, thus restricting access from other, non-HTTP APIs (such as JavaScript). This restriction mitigates but does not eliminate the threat of session cookie theft via cross-site scripting (XSS).[21] This feature applies only to session-management cookies, and not other browser cookies.
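For example, a session cookie carrying both the Secure and HttpOnly attributes could be set as follows (the token value is made up):

Set-Cookie: sessionToken=abc123; Secure; HttpOnly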

Third-party cookie

First-party cookies are cookies that belong to the same domain that is shown in the browser's address bar (or that belong to a subdomain of the domain in the address bar). Third-party cookies are cookies that belong to domains different from the one shown in the address bar. Web pages can feature content from third-party domains (such as banner adverts), which opens up the potential for tracking the user's browsing history. Privacy setting options in most modern browsers allow the blocking of third-party tracking cookies.
As an example, suppose a user visits www.example1.com. This web site contains an advert from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advert's domain (ad.foxytracking.com). Then, the user visits another website, www.example2.com, which also contains an advert from ad.foxytracking.com, and which also sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their ads or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser.
As of 2014, some websites were setting cookies readable for over 100 third-party domains.[22] On average, a single website was setting 10 cookies, with maximum number of cookies (first- and third-party) reaching over 800.[23]

Supercookie

A "supercookie" is a cookie with an origin of a Top-Level Domain (such as .com) or a Public Suffix (such as .co.uk). It is important that supercookies are blocked by browsers, due to the security holes they introduce. If unblocked, an attacker in control of a malicious website could set a supercookie and potentially disrupt or impersonate legitimate user requests to another website that shares the same Top-Level Domain or Public Suffix as the malicious website. For example, a supercookie with an origin of .com, could maliciously affect a request made to example.com, even if the cookie did not originate from example.com. This can be used to fake logins or change user information.
The Public Suffix List is a cross-vendor initiative to provide an accurate and up-to-date list of domain name suffixes. Older versions of browsers may not have the most up-to-date list, and will therefore be vulnerable to supercookies from certain domains.

Supercookie (other uses)

The term "supercookie" is sometimes used for tracking technologies that do not rely on HTTP cookies. Two such "supercookie" mechanisms were found on Microsoft websites: cookie syncing that respawned MUID (Machine Unique IDentifier) cookies, and ETag cookies.[24] Due to media attention, Microsoft later disabled this code:[25]
In response to recent attention on "supercookies" in the media, we wanted to share more detail on the immediate action we took to address this issue, as well as affirm our commitment to the privacy of our customers. According to researchers, including Jonathan Mayer at Stanford University, "supercookies" are capable of re-creating users' cookies or other identifiers after people deleted regular cookies. Mr. Mayer identified Microsoft as one among others that had this code, and when he brought his findings to our attention we promptly investigated. We determined that the cookie behavior he observed was occurring under certain circumstances as a result of older code that was used only on our own sites, and was already scheduled to be discontinued. We accelerated this process and quickly disabled this code. At no time did this functionality cause Microsoft cookie identifiers or data associated with those identifiers to be shared outside of Microsoft.

Setting a cookie

Transfer of Web pages follows the HyperText Transfer Protocol (HTTP). Regardless of cookies, browsers request a page from web servers by sending them a short text message called an HTTP request. For example, to access the page http://www.example.org/index.html, browsers connect to the server www.example.org, sending it a request that looks like the following:
GET /index.html HTTP/1.1
Host: www.example.org
The server replies by sending the requested page preceded by a similar packet of text, called 'HTTP response'. This packet may contain lines requesting the browser to store cookies:
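For example, a response that sets two cookies might look like the following; the header names are standard, but the cookie names, values, and expiry date are illustrative.

HTTP/1.0 200 OK
Content-Type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT

(content of the page follows)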
The server sends lines of Set-Cookie only if the server wishes the browser to store cookies. Set-Cookie is a directive for the browser to store the cookie and send it back in future requests to the server (subject to expiration time or other cookie attributes), if the browser supports cookies and cookies are enabled. For example, the browser requests the page http://www.example.org/spec.html by sending the server www.example.org a request like the following:
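GET /spec.html HTTP/1.1
Host: www.example.org
Cookie: theme=light; sessionToken=abc123

(The Cookie header here echoes the illustrative values from the response above.)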
This is a request for another page from the same server, and differs from the first one above because it contains the string that the server has previously sent to the browser. This way, the server knows that this request is related to the previous one. The server answers by sending the requested page, possibly adding other cookies as well.
The value of a cookie can be modified by the server by sending a new Set-Cookie: name=newvalue line in response to a page request. The browser then replaces the old value with the new one.
The value of a cookie may consist of any printable ASCII character (! through ~, Unicode \u0021 through \u007E) excluding the comma (,) and semicolon (;), and excluding whitespace. The name of a cookie also excludes the = character, as that is the delimiter between the name and value. The cookie standard RFC 2965 is more restrictive but is not implemented by browsers.
Some of the operations that can be done using cookies can also be done using other mechanisms.

IP address

Some users may be tracked based on the IP address of the computer requesting the page. The server knows the IP address of the computer running the browser or the proxy, if any is used, and could theoretically link a user's session to this IP address.
IP addresses are, generally, not a reliable way to track a session or identify a user. Many computers designed to be used by a single user, such as office PCs or home PCs, are behind a network address translator (NAT). This means that several PCs will share a public IP address. Furthermore, some systems, such as Tor, are designed to retain Internet anonymity, rendering tracking by IP address impractical, impossible, or a security risk.

Setting Cookies

Servers supply cookies by populating the Set-Cookie response header with the following details:
Name
Name of the cookie
Value
Textual value to be held by the cookie
Expires
Date/time when the cookie should be discarded by the browser.
If this field is empty the cookie expires at the end of the current browser session. This field can also be used to delete a cookie by setting a date/time in the past.
Path
Path below which the cookie should be supplied by the browser.
Domain
Web site domain to which this cookie applies.
This will default to the current domain and attempts to set cookies on other domains are subject to the privacy controls built into the browser.
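Putting these fields together, a complete Set-Cookie header might look like the following (all names, values, and dates are illustrative):

Set-Cookie: userPrefs=compact; Expires=Fri, 01 Jan 2016 00:00:00 GMT; Path=/docs; Domain=example.org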
Cookies are usually small text files, given ID tags, that are stored in your computer's browser directory or program data subfolders. Cookies are created when you use your browser to visit a website that uses cookies.
There are two types of cookies: session cookies and persistent cookies. Session cookies are created temporarily in your browser's subfolder while you are visiting a website. Once you leave the site, the session cookie is deleted.
Persistent cookie files remain in your browser's subfolder and are activated again once you visit the website that created that particular cookie. A persistent cookie remains in the browser's subfolder for the duration period set within the cookie's file.

Cookie persistence
Cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited at a web site.
Destination address affinity persistence
Also known as sticky persistence, destination address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the destination IP address of a packet.
Hash persistence
Hash persistence allows you to create a persistence hash based on an existing iRule.
Microsoft® Remote Desktop Protocol persistence
Microsoft® Remote Desktop Protocol (MSRDP) persistence tracks sessions between clients and servers running the Microsoft® Remote Desktop Protocol (RDP) service.
SIP persistence
SIP persistence is a type of persistence used for servers that receive Session Initiation Protocol (SIP) messages sent through UDP, SCTP, or TCP.
Source address affinity persistence
Also known as simple persistence, source address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the source IP address of a packet.
SSL persistence
SSL persistence is a type of persistence that tracks non-terminated SSL sessions, using the SSL session ID.
Universal persistence
Universal persistence allows you to write an expression that defines what to persist on in a packet. The expression, written using the same expression syntax that you use in iRules™, defines some sequence of bytes to use as a session identifier.
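As a rough sketch of universal persistence, the following iRule persists on a hypothetical X-Session-ID header inserted by the application; the header name and the 1800-second (30-minute) timeout are assumptions for illustration.

when HTTP_REQUEST {
    # Persist on the value of a hypothetical application session header for 1800 seconds
    if { [HTTP::header exists "X-Session-ID"] } {
        persist uie [HTTP::header "X-Session-ID"] 1800
    }
}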
You can set up the BIG-IP system to use HTTP cookie persistence. Cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same pool member previously visited at a web site.
Description: http://support.f5.com/images/assets/bullet.gif
Description: http://support.f5.com/images/assets/bullet.gif

Description: http://support.f5.com/images/assets/bullet.gif
Understanding Cookie profile settings
To implement cookie persistence, you can either use the default cookie profile, or create a custom profile. Table 7.1 shows the settings and values that make up a Cookie profile.

Cookie Name
This value is autogenerated based on the pool name.
Expiration
Sets the expiration time of the cookie. Applies to the HTTP Cookie Insert and HTTP Cookie Rewrite methods only. When using the default (checked), the system uses the expiration time specified in the session cookie.
Timeout
This setting applies to the Cookie Hash method only. The setting specifies the duration, in seconds, of a persistence entry. For background information on setting timeout values, see Chapter 1, Introducing Local Traffic Management.
Mirror Persistence
Specifies, when enabled (checked), that if the active unit goes into standby mode, the system mirrors any persistence records to its peer. With respect to Cookie profiles, this setting applies to the Cookie Hash method only.
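For reference, a custom cookie persistence profile can also be created from tmsh; in this sketch the profile name and cookie name are hypothetical, and the profile inherits its remaining settings from the default cookie profile.

# Create a custom cookie persistence profile based on the default cookie profile (illustrative names)
tmsh create ltm persistence cookie my_cookie_persist defaults-from cookie cookie-name MY_APP_COOKIE
tmsh list ltm persistence cookie my_cookie_persist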



