Tuesday, June 3, 2014

MTA

The e-mail server operates two separate processes: the Mail Transfer Agent (MTA) and the Mail Delivery Agent (MDA).

The MTA process is used to forward e-mail. As shown in the figure, the MTA receives messages from the MUA or from another MTA on another e-mail server. Based on the message header, it determines how a message must be forwarded to reach its destination. If the mail is addressed to a user whose mailbox is on the local server, the mail is passed to the MDA. If the mail is for a user not on the local server, the MTA routes the e-mail to the MTA on the appropriate server.

As the figure also shows, the Mail Delivery Agent (MDA) accepts a piece of e-mail from an MTA and performs the actual delivery. The MDA receives all inbound mail from the MTA and places it into the appropriate users' mailboxes. The MDA can also resolve final delivery issues, such as virus scanning, spam filtering, and return-receipt handling.

Most e-mail communications use the MUA, MTA, and MDA applications, but there are other alternatives for e-mail delivery. A client may be connected to a corporate e-mail system, such as IBM's Lotus Notes, Novell's GroupWise, or Microsoft's Exchange. These systems often have their own internal e-mail format, and their clients typically communicate with the e-mail server using a proprietary protocol. The server sends or receives e-mail via the Internet through the product's Internet mail gateway, which performs any necessary reformatting. If, for example, two people who work for the same company exchange e-mail with each other using a proprietary protocol, their messages may stay completely within the company's corporate e-mail system. As another alternative, computers that do not have an MUA can still connect to a mail service through a web browser to retrieve and send messages. Some computers may also run their own MTA and manage inter-domain e-mail themselves.
As mentioned earlier, e-mail relies on the POP and SMTP protocols (see the figure for an explanation of how each works). POP and POP3 (Post Office Protocol, version 3) are inbound mail delivery protocols and are typical client/server protocols: they deliver e-mail from the e-mail server to the client (MUA). The MDA listens for a client to connect to the server; once a connection is established, the server can deliver the e-mail to the client.

The Simple Mail Transfer Protocol (SMTP), on the other hand, governs the transfer of outbound e-mail from the sending client to the e-mail server (MDA), as well as the transport of e-mail between e-mail servers (MTA). SMTP enables e-mail to be transported across data networks between different types of server and client software, and it makes e-mail exchange over the Internet possible. The SMTP message format uses a rigid set of commands and replies. These commands support the procedures used in SMTP, such as session initiation, mail transactions, forwarding mail, verifying mailbox names, expanding mailing lists, and opening and closing exchanges.

Some of the commands specified in the SMTP protocol are:

HELO - identifies the SMTP client process to the SMTP server process
EHLO - a newer version of HELO that includes service extensions
MAIL FROM - identifies the sender
RCPT TO - identifies the recipient
DATA - identifies the body of the message

SMB

The Server Message Block (SMB) is a client/server file sharing protocol. IBM developed SMB in the late 1980s to describe the structure of shared network resources, such as directories, files, printers, and serial ports. It is a request-response protocol. Unlike the file sharing supported by FTP, SMB clients establish a long-term connection to servers. Once the connection is established, the user of the client can access the resources on the server as if the resources were local to the client host.

SMB file-sharing and print services have become the mainstay of Microsoft networking. With the introduction of the Windows 2000 series of software, Microsoft changed the underlying structure for using SMB. In previous versions of Microsoft products, the SMB services used a non-TCP/IP protocol to implement name resolution. Beginning with Windows 2000, all subsequent Microsoft products use DNS naming. This allows TCP/IP protocols to directly support SMB resource sharing, as shown in the figure. The Linux and UNIX operating systems also provide a method of sharing resources with Microsoft networks using an implementation of SMB called Samba. The Apple Macintosh operating systems also support resource sharing using the SMB protocol.
The SMB protocol describes file system access and how clients can make requests for files. It also describes SMB inter-process communication. All SMB messages share a common format: a fixed-sized header followed by a variable-sized parameter and data component. SMB messages can:

Start, authenticate, and terminate sessions
Control file and printer access
Allow an application to send or receive messages to or from another device

Linux TCP/IP

The server has now created a socket and is waiting for a connection. The connection procedure in TCP/IP is a so-called "three-way handshake." First, the client sends a TCP packet with the SYN flag set and no payload (a SYN packet). The server replies with a packet that has the SYN and ACK flags set (a SYN/ACK packet) to acknowledge receipt of the initial packet. The client then sends an ACK packet to acknowledge receipt of the second packet and to finalize the connection procedure. After receiving the SYN/ACK packet, the server wakes up a receiver process to wait for data. When the three-way handshake is complete, the client starts to send the "useful" data to be transferred to the server. Usually, an HTTP request is quite small and fits into a single packet. But in this case, at least four packets will be sent in both directions, adding considerable delay. Note also that the receiver has been waiting for the information since before the data was ever sent.

To alleviate these problems, Linux (along with some other OSs) includes a TCP_DEFER_ACCEPT option in its TCP implementation. Set on a server-side listening socket, it instructs the kernel not to wait for the final ACK packet and not to wake the server process until the first packet of real data has arrived. After sending the SYN/ACK, the server will simply wait for a data packet from the client. Now only three packets will be sent over the network, and the connection-establishment delay will be significantly reduced, which matters for a short exchange such as HTTP.

Equivalents of this option are available on other operating systems as well. For example, in FreeBSD, the same behavior is achieved with the following code:
/* some code skipped for clarity */
struct accept_filter_arg af = { "dataready", "" };
setsockopt(s, SOL_SOCKET, SO_ACCEPTFILTER, &af, sizeof(af));

This feature, called an “accept filter” in FreeBSD, is used in different ways, although in all cases, the effect is the same as TCP_DEFER_ACCEPT—the server will not wait for the final ACK packet, waiting only for a packet carrying a payload. More information about this option and its significance for a high-performance Web server is available in the Apache documentation.

With HTTP client-server interaction, it may be necessary to change client behavior. Why does the client send this "useless" ACK packet at all? The TCP stack itself has no way of knowing whether the ACK is needed. If FTP were used instead of HTTP, the client would not send any data until it received a packet with the FTP server prompt; in that case, a delayed ACK would cause a delay in the client-server interaction. To decide whether this ACK is necessary, the client would have to know the application protocol and its current state. Thus, it is necessary to modify client behavior.

For Linux-based clients, we can use another option, also called TCP_DEFER_ACCEPT. There are two types of sockets, listening and connected sockets, with two corresponding sets of options; hence, it is possible to use the same name for these two options, which are often used together. After setting this option on a connected socket, the client will not send an ACK packet after receiving a SYN/ACK packet; instead, it will wait for the next request from the user program to send data. The server is therefore sent one fewer packet.

TCP_QUICKACK
Another way to prevent delays caused by sending useless packets is to use the TCP_QUICKACK option. It differs from TCP_DEFER_ACCEPT in that it can be used not only to manage the connection-establishment process but also during normal data transfer, and it can be set on either side of the client-server connection. Delaying the ACK packet is useful when it is known that user data will be sent soon: it is better to set the ACK flag on that data packet to minimize overhead. When the sender is sure that data will immediately be sent (in multiple packets), the TCP_QUICKACK option can be set to 0. The default value of this option is 1 for sockets in the "connected" state, and the kernel resets the option to 1 immediately after the first use. (It is a one-time option.)

In another scenario, it could be beneficial to send out the ACK packets immediately. Each ACK packet confirms receipt of a data block, so the next block can be processed without delay. This mode of data transfer is typical for interactive processes, where the moment of user input is unpredictable. In Linux, this is the default socket behavior.

In the aforementioned situation, where a client is sending HTTP requests to a server, it is known in advance that the request packet is short and should be sent immediately after the connection is established; this is simply how HTTP works. There is no need to send a pure ACK packet, so it is possible to set TCP_QUICKACK to 0 to improve performance. On the server side, both options can be set just once on the listening socket; all sockets created indirectly by an accept call will inherit the options of the original socket.

By combining the TCP_CORK, TCP_DEFER_ACCEPT, and TCP_QUICKACK options, the number of packets participating in each HTTP transaction is reduced to the minimum acceptable level (as required by the TCP protocol and by security considerations). The result is not only faster data transfer and request processing but also minimized client-server round-trip latency.

Conclusion
Optimization of network program performance is clearly a complex task. Techniques include using the zero-copy approach wherever possible, assembling packets properly with TCP_CORK and its equivalents, minimizing the number of packets transferred, and optimizing latency. For any significant increase in performance and scalability, all available methods must be used jointly and consistently across the code. And, of course, a clear understanding of the inner workings of the TCP/IP stack and of the OS as a whole is required as well.

Data Link Layer

ARP/RARP Address Resolution Protocol/Reverse Address Resolution Protocol
DCAP Data Link Switching Client Access Protocol

Network Layer
DHCP Dynamic Host Configuration Protocol
DVMRP Distance Vector Multicast Routing Protocol
ICMP/ICMPv6 Internet Control Message Protocol
IGMP Internet Group Management Protocol
IP Internet Protocol version 4
IPv6 Internet Protocol version 6
MARS Multicast Address Resolution Server
PIM Protocol Independent Multicast-Sparse Mode (PIM-SM)
RIP2 Routing Information Protocol
RIPng for IPv6 Routing Information Protocol for IPv6
RSVP Resource ReSerVation setup Protocol
VRRP Virtual Router Redundancy Protocol

Transport Layer
ISTP
Mobile IP Mobile IP Protocol
RUDP Reliable UDP
TALI Transport Adapter Layer Interface
TCP Transmission Control Protocol
UDP User Datagram Protocol
Van Jacobson compressed TCP
XOT X.25 over TCP

Session Layer
BGMP Border Gateway Multicast Protocol
Diameter
DIS Distributed Interactive Simulation
DNS Domain Name Service
ISAKMP/IKE Internet Security Association and Key Management Protocol and Internet Key Exchange Protocol
iSCSI Internet Small Computer Systems Interface
LDAP Lightweight Directory Access Protocol
MZAP Multicast-Scope Zone Announcement Protocol
NetBIOS/IP NetBIOS/IP for TCP/IP Environment

Application Layer
COPS Common Open Policy Service
FANP Flow Attribute Notification Protocol
Finger User Information Protocol
FTP File Transfer Protocol
HTTP Hypertext Transfer Protocol
IMAP4 Internet Message Access Protocol rev 4
IMPPpre/IMPPmes Instant Messaging and Presence Protocols
IPDC IP Device Control
IRC Internet Relay Chat Protocol
ISAKMP Internet Security Association and Key Management Protocol
ISP
NTP Network Time Protocol
POP3 Post Office Protocol version 3
Radius Remote Authentication Dial In User Service
RLOGIN Remote Login
RTSP Real-time Streaming Protocol
SCTP Stream Control Transmission Protocol
S-HTTP Secure Hypertext Transfer Protocol
SLP Service Location Protocol
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SOCKS Socket Secure (Server)
TACACS+ Terminal Access Controller Access Control System
TELNET TCP/IP Terminal Emulation Protocol
TFTP Trivial File Transfer Protocol
WCCP Web Cache Coordination Protocol
X-Window X Window

Routing
BGP-4 Border Gateway Protocol
EGP Exterior Gateway Protocol
EIGRP Enhanced Interior Gateway Routing Protocol
HSRP Cisco Hot Standby Router Protocol
IGRP Interior Gateway Routing Protocol
NARP NBMA Address Resolution Protocol
NHRP Next Hop Resolution Protocol
OSPF Open Shortest Path First
TRIP Telephony Routing over IP

Tunneling
ATMP Ascend Tunnel Management Protocol
L2F The Layer 2 Forwarding Protocol
L2TP Layer 2 Tunneling Protocol
PPTP Point to Point Tunneling Protocol

Security
AH Authentication Header
ESP Encapsulating Security Payload
TLS Transport Layer Security Protocol

Routing protocols

An "internet" is a group of interconnected networks. The Internet, on the other hand, is the collection of networks that permits communication between most research institutions, universities, and many other organizations around the world. Routers within the Internet are organized hierarchically. Some routers are used to move information through one particular group of networks under the same administrative authority and control (such an entity is called an autonomous system). Routers used for information exchange within autonomous systems are called interior routers, and they use a variety of interior gateway protocols (IGPs) to accomplish this end. Routers that move information between autonomous systems are called exterior routers; they use the Exterior Gateway Protocol (EGP) or Border Gateway Protocol (BGP).
Routing protocols used with IP are dynamic in nature. Dynamic routing requires the software in the routing devices to calculate routes; dynamic routing algorithms adapt to changes in the network and automatically select the best routes. In contrast, static routing calls for routes to be established by the network administrator, and static routes do not change until the network administrator changes them.

IP routing tables consist of destination address/next hop pairs. In this sample routing table from a Cisco router, the first entry is interpreted as "to get to network 34.1.0.0 (subnet 1 on network 34), the next hop is the node at address 54.34.23.12":

R6-2500# show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     34.0.0.0/16 is subnetted, 1 subnets
O       34.1.0.0 [110/65] via 54.34.23.12, 00:00:51, Serial0
     54.0.0.0/24 is subnetted, 1 subnets
C       54.34.23.0 is directly connected, Serial0
R6-2500#

As we have seen, IP routing specifies that IP datagrams travel through an internetwork one router hop at a time. The entire route is not known at the outset of the journey; instead, at each stop, the next router hop is determined by matching the destination address within the datagram with an entry in the current node's routing table. Each node's involvement in the routing process consists only of forwarding packets based on internal information. IP does not provide for error reporting back to the source when routing anomalies occur.
This task is left to another Internet protocol—the Internet Control Message Protocol (ICMP). ICMP performs a number of tasks within an IP internetwork. In addition to the principal reason for which it was created (reporting routing failures back to the source), ICMP provides a method for testing node reachability across an internet (the ICMP Echo and Reply messages), a method for increasing routing efficiency (the ICMP Redirect message), a method for informing sources that a datagram has exceeded its allocated time to exist within an internet (the ICMP Time Exceeded message), and other helpful messages. All in all, ICMP is an integral part of any IP implementation, particularly those that run in routers.
.::BY JUMBHO, AT MY HOME IN THE BEAUTIFUL CITY OF JEPARA::.