2009年11月12日 星期四

How can I implement VLANs across WLAN links?

Virtual LANs (VLANs) are used to subdivide one local area network into several logically isolated broadcast domains, independent of physical topology. The LAN being subdivided into logical pieces can be any type of LAN -- including Ethernet or Wi-Fi.
Port-based VLANs rely on switch or AP configuration to enforce VLAN membership. For example, a switch can be configured to put ports 1 through 8 into VLAN #1 and ports 9 through 16 into VLAN #2. Every station in VLAN #1 will hear the same LAN broadcasts, but nobody in VLAN #2 will be able to do so. Similarly, a wireless AP can be configured to relay traffic to and from VLAN #1 onto a named network (SSID) while relaying traffic to and from VLAN #2 onto a different SSID. That technique is commonly used to segregate guest wireless traffic from other (private) wireless traffic on the wired network.

Alternatively, 802.1Q uses tags (VLAN IDs) carried inside LAN frames to segregate traffic and keep it separated. VLAN tags let 802.1Q-capable devices like switches, APs, routers, and firewalls enforce VLAN segregation along the packet's entire path.
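To make the tag format concrete, here is a minimal sketch (not tied to any particular vendor's implementation) of how an 802.1Q tag sits inside an Ethernet frame: the tag is a 2-byte TPID of 0x8100 followed by a 2-byte TCI holding a 3-bit priority and a 12-bit VLAN ID.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def parse_vlan_tag(frame: bytes):
    """Return (vlan_id, priority, inner_ethertype) if the Ethernet frame
    carries an 802.1Q tag, else None. Offsets follow the 802.1Q layout:
    6B dst MAC + 6B src MAC + 2B TPID + 2B TCI."""
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != TPID_8021Q:
        return None
    tci, ethertype = struct.unpack_from("!HH", frame, 14)
    pcp = tci >> 13          # 3-bit priority code point
    vid = tci & 0x0FFF       # 12-bit VLAN ID (1..4094 usable)
    return vid, pcp, ethertype

# Tagged frame: zeroed dst/src MACs, TPID 0x8100, TCI (PCP=5, VID=100), inner type IPv4
frame = bytes(12) + struct.pack("!HHH", 0x8100, (5 << 13) | 100, 0x0800)
print(parse_vlan_tag(frame))  # → (100, 5, 2048)
```

An 802.1Q-capable switch, AP, or firewall reads exactly these fields to decide which VLAN a frame belongs to.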

As described above, a wireless AP can be configured to apply a specific VLAN tag to each frame from a particular SSID. Or, wireless APs can receive VLAN tag assignments for each station during 802.1X authentication, supplied by a RADIUS server using RFC 3580. This technique can put individual users into the right VLAN, based on authenticated identity instead of the SSID they connect to.
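The RFC 3580 VLAN assignment mentioned above is carried in three RADIUS tunnel attributes. A hypothetical FreeRADIUS "users"-file entry (the user name and password here are made up for illustration) might look like:

```text
# On successful 802.1X authentication, the RADIUS server returns the
# RFC 3580 tunnel attributes and the AP/switch places the station in VLAN 20.
alice   Cleartext-Password := "example-only"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-Id = "20"
```

The AP maps the returned Tunnel-Private-Group-Id onto the matching VLAN tag for that station's traffic.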

VLANs can be extended all the way across an enterprise network, from a branch office, across the WAN, to headquarters. A VLAN tag does not traverse this entire route, because VLAN tags apply only to local area networks. However, routers and firewalls along the way can be configured to map VLAN tags onto network sub-interfaces.

For example, traffic from VLAN #1 might be routed onto VPN tunnel A as it traverses the Internet, while traffic from VLAN #2 would be routed through VPN tunnel B, etc. Traffic through both VPN tunnels would probably be transmitted over the same WAN link in between locations. In other words, VPN tunnels can keep layer 3 traffic segregated over IP networks, just like VLANs keep layer 2 traffic segregated over LANs.

Good links

http://www.tech-faq.com/mac-address.shtml

2009年11月10日 星期二

Media Access Control

The Media Access Control (MAC) data communication protocol sub-layer, also known as the Medium Access Control, is a sublayer of the Data Link Layer specified in the seven-layer OSI model (layer 2). It provides addressing and channel access control mechanisms that make it possible for several terminals or network nodes to communicate within a multipoint network, typically a local area network (LAN) or metropolitan area network (MAN). The hardware that implements the MAC is referred to as a Medium Access Controller.

The MAC sub-layer acts as an interface between the Logical Link Control (LLC) sublayer and the network's physical layer. The MAC layer emulates a full-duplex logical communication channel in a multipoint network. This channel may provide unicast, multicast or broadcast communication service.




Addressing mechanism

The MAC layer addressing mechanism is called physical address or MAC address. A MAC address is a unique serial number. Once a MAC address has been assigned to a particular piece of network hardware (at time of manufacture), that device should be uniquely identifiable amongst all other network devices in the world. This guarantees that each device in a network will have a different MAC address (analogous to a street address). This makes it possible for data packets to be delivered to a destination within a subnetwork, i.e. a physical network consisting of several network segments interconnected by repeaters, hubs, bridges and switches, but not by IP routers. An IP router may interconnect several subnets.
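A 48-bit MAC address also encodes two flag bits in its first octet: the I/G bit marks multicast addresses and the U/L bit marks locally administered (non-manufacturer-assigned) addresses, while the first three octets form the vendor's OUI. A small sketch:

```python
def describe_mac(mac: str):
    """Decode the two flag bits in the first octet of a 48-bit MAC address:
    bit 0 (I/G) = multicast, bit 1 (U/L) = locally administered."""
    first_octet = int(mac.split(":")[0], 16)
    return {
        "oui": mac.upper()[:8],                            # first 3 octets: vendor OUI
        "multicast": bool(first_octet & 0b01),             # I/G bit
        "locally_administered": bool(first_octet & 0b10),  # U/L bit
    }

print(describe_mac("00:1a:2b:3c:4d:5e"))
# → {'oui': '00:1A:2B', 'multicast': False, 'locally_administered': False}
```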

An example of a physical network is an Ethernet network, perhaps extended by wireless local area network (WLAN) access points and WLAN network adapters, since these share the same 48-bit MAC address hierarchy as Ethernet.

A MAC layer is not required in full-duplex point-to-point communication, but address fields are included in some point-to-point protocols for compatibility reasons.



Channel access control mechanism

The channel access control mechanisms provided by the MAC layer are also known as a multiple access protocol. This makes it possible for several stations connected to the same physical medium to share it. Examples of shared physical media are bus networks, ring networks, hub networks, wireless networks and half-duplex point-to-point links. The multiple access protocol may detect or avoid data packet collisions if a packet mode contention based channel access method is used, or reserve resources to establish a logical channel if a circuit switched or channelization based channel access method is used. The channel access control mechanism relies on a physical layer multiplex scheme.

The most widespread multiple access protocol is the contention based CSMA/CD protocol used in Ethernet networks. This mechanism is only utilized within a network collision domain, for example an Ethernet bus network or a hub network. An Ethernet network may be divided into several collision domains, interconnected by bridges and switches.

A multiple access protocol is not required in a switched full-duplex network, such as today's switched Ethernet networks, but is often available in the equipment for compatibility reasons.

Wireless ad hoc network

A wireless ad hoc network is a decentralized wireless network.[1] The network is ad hoc because it does not rely on a preexisting infrastructure, such as routers in wired networks or access points in managed (infrastructure) wireless networks. Instead, each node participates in routing by forwarding data for other nodes, and so the determination of which nodes forward data is made dynamically based on the network connectivity.

Application
The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes cannot be relied on, and it may improve their scalability compared to wireless managed networks, though theoretical and practical limits to the overall capacity of such networks have been identified.

Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of a dynamic and adaptive routing protocol will enable ad hoc networks to be formed quickly.

Medium Access Control
In most wireless ad hoc networks the nodes compete to access the shared wireless medium, often resulting in collisions. Using cooperative wireless communications improves immunity to interference by having the destination node combine self-interference and other-node interference to improve decoding of the desired signal.

802.3 MAC Frame

A data packet on the wire is called a frame. A frame viewed on the actual physical wire would show Preamble and Start Frame Delimiter, in addition to the other data. These are required by all physical hardware.

The table below shows the complete Ethernet frame, as transmitted, for the MTU of 1500 bytes (some implementations of gigabit Ethernet and higher speeds support larger jumbo frames). Note that the bit patterns in the preamble and start of frame delimiter are written as bit strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least significant bit first). This notation matches the one used in the IEEE 802.3 standard. One octet is eight bits of data (i.e., a byte on most modern computers).
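As a rough sketch of the on-wire layout described above (field sizes per 802.3; the FCS is the standard CRC-32, which Python's zlib computes), a minimal frame could be assembled like this:

```python
import struct
import zlib

PREAMBLE = b"\x55" * 7   # 7 octets of alternating 10101010... bits
SFD = b"\xd5"            # start frame delimiter: 10101011 (LSB transmitted first)

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a minimal Ethernet frame as it appears on the wire.
    The payload is padded to 46 octets so the dst..FCS portion is >= 64."""
    if len(payload) > 1500:
        raise ValueError("payload exceeds the 1500-octet MTU")
    payload = payload.ljust(46, b"\x00")
    body = dst + src + struct.pack("!H", ethertype) + payload
    fcs = struct.pack("<I", zlib.crc32(body))  # CRC-32 FCS, least significant octet first
    return PREAMBLE + SFD + body + fcs

frame = build_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", 0x0800, b"hello")
print(len(frame))  # → 72 (8 octets preamble/SFD + 64-octet minimum frame)
```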

Wiki Frame format

2009年11月9日 星期一

Quality of service

In the field of computer networking and other packet-switched telecommunication networks, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.

For example, a required bit rate, delay, jitter, packet dropping probability and/or bit error rate may be guaranteed. Quality of service guarantees are important if the network capacity is insufficient, especially for real-time streaming multimedia applications such as voice over IP, online games and IP-TV, since these often require fixed bit rate and are delay sensitive, and in networks where the capacity is a limited resource, for example in cellular data communication. In the absence of network congestion, QoS mechanisms are not required.

A network or protocol that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, for example during a session establishment phase. During the session it may monitor the achieved level of performance, for example the data rate and delay, and dynamically control scheduling priorities in the network nodes. It may release the reserved capacity during a tear down phase.
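One common mechanism for policing such a traffic contract is a token bucket: the contracted rate refills a bucket of credit, and a packet conforms only if enough credit remains. A minimal sketch (parameter names are illustrative, not from any particular standard):

```python
class TokenBucket:
    """Minimal token-bucket policer: 'rate' tokens (bytes) accrue per second,
    capped at 'burst'; a packet conforms if enough tokens remain."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, size: int, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=1000, burst=1500)  # 1000 bytes/s, 1500-byte burst
print(bucket.allow(1500, now=0.0))  # → True  (burst credit covers it)
print(bucket.allow(500, now=0.0))   # → False (bucket empty, no time elapsed)
print(bucket.allow(500, now=1.0))   # → True  (1 s refills 1000 tokens)
```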

Dealing with multiple clients

CSMA/CD shared medium Ethernet

Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. When a computer wanted to send some information, it used the following algorithm:

Main procedure
1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes free, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Did a collision occur? If so, go to the collision detected procedure.
5. Reset retransmission counters and end frame transmission.

Collision detected procedure
1. Continue transmission (jam signal) until the minimum packet time is reached, to ensure that all receivers detect the collision.
2. Increment the retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait a random backoff period based on the number of collisions.
5. Re-enter the main procedure at stage 1.
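The two procedures above can be sketched together as a single loop. This is only an illustrative model (the carrier-sense and collision-detection hardware are stand-in callables, and the sleeps are commented out), not a real driver:

```python
import random

SLOT = 51.2e-6        # slot time for 10 Mbit/s Ethernet, in seconds
MAX_ATTEMPTS = 16     # attempt limit before the MAC gives up
BACKOFF_CEILING = 10  # exponent stops growing after 10 collisions

def transmit(medium_idle, collided):
    """Sketch of CSMA/CD: 'medium_idle' and 'collided' are callables standing
    in for carrier sense and collision detection. Returns the number of
    collisions suffered before success."""
    attempts = 0
    while True:
        while not medium_idle():          # main step 2: defer until medium is free
            pass
        # (wait the 9.6 µs interframe gap here, then start transmitting)
        if not collided():                # main steps 3-5: success
            return attempts
        attempts += 1                     # collision: jam signal, count the attempt
        if attempts >= MAX_ATTEMPTS:      # collision step 3: give up after 16 tries
            raise RuntimeError("transmission aborted after 16 attempts")
        k = min(attempts, BACKOFF_CEILING)
        delay = random.randrange(2 ** k) * SLOT  # collision step 4: random backoff
        # (sleep for 'delay' seconds before re-entering the main procedure)

# Medium always idle; the first two attempts collide, the third succeeds.
outcomes = iter([True, True, False])
print(transmit(lambda: True, lambda: next(outcomes)))  # → 2
```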

This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is generally measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.

Truncated binary exponential backoff

In a variety of computer networks, binary exponential backoff or truncated binary exponential backoff refers to an algorithm used to space out repeated retransmissions of the same block of data.

Examples are the retransmission of frames in carrier sense multiple access with collision avoidance (CSMA/CA) and carrier sense multiple access with collision detection (CSMA/CD) networks, where this algorithm is part of the channel access method used to send data on these networks. In Ethernet networks, the algorithm is commonly used to schedule retransmissions after collisions. The retransmission is delayed by an amount of time derived from the slot time and the number of attempts to retransmit.

After i collisions, a random number of slot times between 0 and 2^i − 1 is chosen. For the first collision, each sender might wait 0 or 1 slot times. After the second collision, the senders might wait 0, 1, 2, or 3 slot times, and so forth. As the number of retransmission attempts increases, the number of possibilities for delay increases.

The 'truncated' simply means that after a certain number of increases, the exponentiation stops; i.e. the retransmission timeout reaches a ceiling, and thereafter does not increase any further. For example, if the ceiling is set at i=10, then the maximum delay is 1023 slot times.
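The growth and truncation of the backoff range can be shown in a couple of lines:

```python
def backoff_slots_range(i: int, ceiling: int = 10) -> range:
    """After the i-th collision, the sender picks uniformly from
    0 .. 2^min(i, ceiling) - 1 slot times (truncation at the ceiling)."""
    return range(2 ** min(i, ceiling))

print(max(backoff_slots_range(1)))   # → 1    (wait 0 or 1 slots)
print(max(backoff_slots_range(2)))   # → 3    (0..3 slots)
print(max(backoff_slots_range(10)))  # → 1023 (ceiling reached)
print(max(backoff_slots_range(15)))  # → 1023 (no further increase)
```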

Because these delays cause other stations that are sending to collide as well, there is a possibility that, on a busy network, hundreds of stations may be caught in a single collision set. Because of this possibility, the process is aborted after 16 transmission attempts.

Ethernet hub

Hubs are classified as Layer 1 devices in the OSI model. At the physical layer, hubs can support little in the way of sophisticated networking. Hubs do not read any of the data passing through them and are not aware of its source or destination. Essentially, a hub simply receives incoming packets, possibly amplifies the electrical signal, and broadcasts these packets out to all other devices on the network.

Reference:
Wiki

802.3/Ethernet

Reference:
http://www.dcs.gla.ac.uk/~lewis/networkpages/m04s03EthernetFrame.htm

To send a frame, a station on an 802.3 network first listens to the Ether (carrier sense function). If the Ether is busy, the station defers, but, after the current activity stops, it uses a 1-persistent strategy and will wait only for a short, fixed delay, the inter-frame gap, before beginning to transmit. If there is no collision, the transmission will complete successfully. If, however, a collision is detected, the frame transmission stops and the station begins to send a jamming signal to make sure that all other stations realise what has happened. The station then backs off for a random time interval before trying again. The back-off interval is computed using an algorithm called truncated binary exponential backoff, which works as follows.

The station always waits for some multiple of a 51.2 µs time interval, known as a slot. The station chooses a number randomly from the set {0, 1} and waits for that number of slots. If there is another collision it waits again, but this time for a number chosen from {0, 1, 2, 3}. After k collisions on the same transmission it chooses its number randomly from {0, 1, ..., 2^k − 1}, until k = 10, when the set is frozen. After k = 16, the so-called attempt limit, the MAC unit gives up and reports a failure to the layer above.

2009年11月8日 星期日

Hidden and Exposed Station Problems

We referred to hidden and exposed station problems in the previous section. It is time now to discuss these problems and their effects.

Hidden Station Problem
Figure 14.10 shows an example of the hidden station problem.

Station B has a transmission range shown by the left oval (sphere in space);
every station in this range can hear any signal transmitted by station B. Station C has a transmission range shown by the right oval (sphere in space); every station located in this range can hear any signal transmitted by C. Station C is outside the transmission range of B; likewise, station B is outside the transmission range of C. Station A, however, is in the area covered by both B and C; it can hear any signal transmitted by B or C.

Assume that station B is sending data to station A. In the middle of this transmission, station C also has data to send to station A. However, station C is out of B's range and transmissions from B cannot reach C. Therefore C thinks the medium is free.

Why does a CSMA/CD LAN impose both a minimum and a maximum frame size limit?

The 64 byte (Ethernet) minimum limit is based on the fact that - if you follow the standards for cable lengths and number of hops in the Ethernet segment - one node may not "hear" the transmission of another node until the transmitting node has transmitted the 64th byte of data (if the two nodes are far apart from each other, it will take that long for the signal to propagate down the wire(s)).

In CSMA/CD, the transmitting node listens for collisions while it transmits its frame. Once it has finished transmitting the final bit without hearing a collision, it presumes the transmission was successful.

If one node were to transmit a very small frame, it could finish the transmission before a remote node heard the first bits. If the remote node starts to transmit its own frame (because it hasn't heard the transmission of the first node yet), then there will be a collision, but the first node will no longer be listening for collisions because it finished its transmission.

In Ethernet, we presume that by the time one node has finished transmitting the 64th byte, all other nodes will have heard the transmission and will wait before trying to transmit their own data, so we don't normally expect collisions to occur after the 64th byte.
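A quick back-of-the-envelope check of this reasoning: at 10 Mbit/s, the 64-byte minimum frame takes exactly one slot time to transmit, which is the worst-case round-trip budget for a collision to get back to the sender (the figures below are the classic 10 Mbit/s parameters, not a general rule for all speeds):

```python
# 64-byte minimum frame vs. the slot time at classic 10 Mbit/s Ethernet.
BIT_RATE = 10e6            # 10 Mbit/s
MIN_FRAME_BITS = 64 * 8    # 512 bits

slot_time = MIN_FRAME_BITS / BIT_RATE  # time the sender is guaranteed to be listening
print(round(slot_time * 1e6, 1))       # → 51.2 (µs) worst-case round-trip budget
```

So a frame shorter than 512 bits could end before a collision from the far end of a maximum-size network is detected, which is exactly the failure mode described above.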

As a matter of fact, if you look at a switch and see counters for "collisions" and "late collisions" - the late collisions are collisions that occurred after the 64th byte was transmitted. Usually the only time you'd expect to see late collisions is when the network is not cabled properly (cable distances are too long or there are too many hops), or when there's a duplex mismatch between two devices (one is half duplex - listening to the media before transmitting; the other is full duplex - transmitting its data at any time).

As for the maximum frame size, I'm not 100% sure, but I believe it was designed to keep any node from monopolizing the network for too long. If a node has to stop transmitting after 1518 bytes (presuming Ethernet), other nodes get a chance to transmit their data. Another consideration on some technologies is that nodes synchronize their send/receive clocks using the flags (aka prefix, starting delimiter, preamble) sent at the beginning and sometimes the end of each frame. By keeping the frame size relatively small, these synchronization bits are sent and received more often, which helps keep the nodes' clocks synchronized.

I do know that there are Ethernet switches and NICs that support frames larger than 1518 bytes ("jumbo frames") - if you enable them to do so.

2009年11月1日 星期日

Becoming a professional internet marketer

Have you ever thought of preparing yourself to master internet traffic? Google offers an examination for users to become professional internet marketers. The Google Advertising Professional examination tests your knowledge of AdWords, which lets you create simple, effective ads and display them to people already searching online for information related to your business.

There is an online learning center providing thorough educational material for users to study before the examination. Click here to access it: AdWords Learning Center

The exam is taken from home, so you can sit it at any time and place that suits you.
To prepare for the exam, you should spend at least about two weeks studying. You also have to apply for a client account to demonstrate your skill at attracting a client base for a particular business; the account should be maintained within 90 days, and you have to pay Google the minimum price per click, US$1000 each month. The examination fee is US$50 per trial, with 104 questions in an hour and a half. So, altogether the examination costs you US$3000 + US$50 each trial.

Although it seems too expensive for a university student, it does benefit your future career. Let's see how other people comment on the examination.
1. http://blog.clickfire.com/passed-google-advertising-professional-exam/
2. http://www.seochat.com/c/a/Search-Engine-News/Google-Advertising-Professional-How-I-Did-It-and-Is-It-Worth-It/