Frequently Asked Questions
Compiling The Effective TCP/IP Programming Source Code
The source code is available on my Web Site. After you untar or unzip the code, please do read the README file, as it contains important information about compiling the code on various platforms.
Under Windows, most of the networking header information is in winsock2.h. UNIX, on the other hand, spreads it across several separate header files. The nmake file that comes with the sample code provides dummy header files for Windows users. See the README file that comes with the code for information on how to compile under Windows.
If you’re getting errors such as
simplec.obj : error LNK2001: unresolved external symbol __imp__recv@16
the problem is that the linker can’t locate the winsock library, ws2_32.lib. If you are using the makefile included with the source code, this should be handled for you (although it was originally compiled on the now ancient Win95 platform, so your mileage may vary). If you are using the IDE, you will need to make sure that you explicitly include ws2_32.lib.
The following patch to icmp.c helps on many versions of Windows:
145a146
> 	struct sockaddr_in local;
153a155,161
>
> 	local.sin_family = AF_INET;
> 	local.sin_port = htons( 0 );
> 	local.sin_addr.s_addr = htonl( INADDR_ANY );
> 	rc = bind( s, ( struct sockaddr * )&local, sizeof( local ) );
> 	if ( rc < 0 )
> 		error( 1, errno, "bind failed" );
The patch does not appear to be effective on early versions of Windows, such as Win95 and Win98.
My home network, where I write my books and do some limited consulting, comprises six desktop-type machines running various flavors of FreeBSD, Linux, Solaris, and Plan 9. I also have two laptops that can connect to the network either through a wireline connection or a wireless access point.
I built all of the computers except the laptops myself. Those who have built their own machines know that “built” is a bit of an overstatement. It’s really just a matter of mounting the parts in a case and plugging in the cables, but you end up with a better machine that has precisely the features you need, at a fraction of the cost.
My main workstation, which runs Gentoo Linux, has a gigantic 20.1-inch Samsung SyncMaster 204B LCD monitor and a wonderful PCKEYBOARD PC-101 keyboard with buckling-spring key action. For those old enough to remember, this is the same keyboard that came with the original IBM PC. It has great tactile feedback and none of the mushy sponge-cake feel of modern keyboards. For anyone who puts in serious tube time, I can’t recommend an investment in a big monitor enough. I can have two side-by-side 80×82 Xterm windows and still have room to spare. Even at that resolution, the typeface is large enough to read comfortably. No one who uses a computer more than once a week should be without a PCKEYBOARD keyboard.
It’s Fluxbox. Those of you who have been visiting for a while might remember that I used to use KDE, but I’m inclined to like lightweight tools, and KDE is anything but lightweight. Fluxbox is simple, fast, and configured by text files, and the whole thing compiles in a few minutes instead of the several hours that KDE took. I like my desktop simple and uncluttered, but Fluxbox also works for those who like chrome.
I’ve used FreeBSD since version 1.5, so it was hard for me to switch to Linux for the bulk of my everyday work. At this point, both Linux and FreeBSD are solid operating systems, but Linux has better support for multimedia, so it makes sense to use it on my workstation. My FreeBSD server is still the main workhorse on my network, providing DNS, CVS, mail, and other services.
Almost all of my programming is done in C. I have done some projects in C++, but I much prefer C. Like most people, I use a variety of scripting languages for small jobs, especially one-offs. I used to use either awk or perl for most of this type of work, but recently I have been using Python. I like Python a lot; it’s one of the few languages that doesn’t make me want to run screaming back to C. It’s very easy to get started (see this tutorial for a quick introduction), and after six months you can still read your code and understand it. Eric Raymond wrote an interesting article on why he has moved to Python for much of his development. I find that his experience with the language parallels mine.
And then there’s Lisp. Unlike many programmers of my generation, I never learned Lisp; I was too busy wallowing around in assembly language. That changed when I stumbled across Paul Graham’s Revenge of the Nerds, an excellent essay on programming languages and their effect on programmer productivity. The essay inspired me to finally learn Lisp, so I worked through Graham’s book ANSI Common Lisp. I’m glad I did. Lisp, as Graham says, is a dense language that makes it possible to be extraordinarily productive: most studies he’s seen estimate that one line of Lisp can replace 7–10 lines of C, and in some cases the ratio is as high as 20 to 1. I find it really useful for experimenting with algorithms and doing interactive data analysis. On the downside, Lisp suffers from a lack of standardized libraries and interfaces to the operating system. Lately, I’ve been using Scheme, a Lisp dialect, a lot. I find Scheme a nicer language than Common Lisp, though it has a much smaller set of standard functions. Nonetheless, Scheme works very well for the type of problems I use it to solve.
So the bottom line is that virtually all my programming is done in C, Python, or Lisp/Scheme. For low-level communications software, and for getting close to the operating system and hardware, it’s hard to beat C. For glue code (especially a simple GUI front end), utilities, and short one-off scripts, Python offers an ideal solution. For hard problems (especially where standardized libraries aren’t an issue), projects that benefit from interactive programming, and experiments with algorithms and potential solutions, Lisp delivers tremendous power and flexibility. Any competent programmer should be well versed in all three.
Absolutely. I’ve been using vi and its siblings for so long that I keep trying to scroll my Web browser with the j and k keys. One of the things that I originally liked about vi was that it was lightweight and could load and start almost instantaneously. That’s less true with vim, of course, but almost anything is lightweight compared to emacs. I find that vim has all the capabilities that I actually want in an editor. I have no need or desire to read news or play Towers of Hanoi with my editor, for example.
Most people who dislike vi complain about its modality, but I consider it a feature. I’m a touch typist, and I hate taking my hands off the keyboard, even to use the arrow keys. With vim I can scroll the display and move the cursor directly from the keyboard. At the end of the day, of course, it comes down to a matter of preferences. There is no moral dimension to the choice of an editor.
Certainly. I use groff to typeset all my writing, so I don’t need or want any sort of word processor, especially not a WYSIWYG one. I find it almost physically painful to use Word or even OpenOffice, and it affects my writing: I keep thinking about how much I hate using the word processor instead of thinking about what I’m writing.
I regard vim/groff as the perfect writing environment. I can use vim’s split screen capability to look at different parts of a manuscript at the same time, or to edit a sample program in one window while I write commentary on it in another. Vim’s syntax highlighting even knows about troff input, so the groff markup in the manuscript stands out, and is easy to see. Having the text as ASCII means that I can run any of the normal UNIX text tools or special scripts of my own on the manuscript. When I’m finished, I use groff to typeset it exactly as I want it.
I’m very fussy about the look of my books and papers, something I learned from Rich Stevens. Tools such as FrameMaker and Word can produce a sort of typeset document, but there are all sorts of little imperfections. Most readers won’t notice them, but they will notice that the book somehow doesn’t look as nice as it should. With groff I can tweak the output right down to the pixel if I need to.
The only other system that I’d consider using is TeX. Actually, I learned TeX before I learned groff, but I prefer working with the groff language, even though the typesetter doesn’t always do as nice a job of setting paragraphs as TeX does.
Vim/groff, of course. Actually, most of my Web site uses hand-constructed HTML, but this FAQ is written in groff using the HTML output option. The HTML feature in groff is still officially alpha-quality software, but it’s very usable, don’t you think? I probably won’t bother redoing the old pages in groff, but I’ll certainly use it for any new pages.
I use Firefox for Web browsing, and love it. It’s lightweight enough that it loads in less than a lifetime, but still has all the features that I want in a browser.
For mail, I use mutt. It’s very configurable, integrates nicely with Gnu Privacy Guard, and most importantly I can use vim to compose and edit my email. I retrieve email from my ISP with fetchmail, and then run it through procmail and Eric Raymond’s bogofilter to deal with spam. I use postfix as my mail agent.
For programming, I use the Gnu toolchain almost exclusively. That’s very convenient, because virtually every platform, even Windows, now has gcc and gmake available, so it’s much easier to move between platforms than it used to be.
I keep all my programming projects, these Web pages, and my books under CVS. Recently, I’ve been experimenting with Subversion, but I haven’t moved my archives to it yet. When dealing with many different machines, as I do, CVS can be a huge win. Although I don’t go as far as this guy, I find that I am moving more and more of my files to CVS.
Currently I’m the lead engineer for a company that makes network security devices (firewalls, VPN gateways, etc.), but much of my career was spent in the public safety arena where I built radio network controllers, message switches, and computer aided dispatch systems.
Until fairly recently, public safety agencies, such as police, fire departments, and paramedics, used a variety of communication methods to tie their computers together. They had radio data networks for dispatching to vehicles, RS-232 serial connections, bisync, LU 6.2, SNA 3270, and many others. The message switches that I built when I worked in public safety had to deal with all of these methods and route messages between them. A message coming from a vehicle over the radio network might be switched to a dispatching computer over an LU 6.2 line, or to a state law enforcement database over a bisync line. So working on message switches meant learning all these protocols and how to use them together.
Today, these legacy systems have mostly been replaced by TCP/IP based LANs and WANs, so there’s less need to know the esoteric protocols. Still, in public safety applications you absolutely, positively have to have reliable communications, so I learned which techniques worked and which didn’t.
By reading the code of those who had already mastered them. It’s amazing how much you can learn just by reading the code for a TCP/IP stack, for instance. There’s a huge amount of good (and bad) code available to serve as examples. These days, the hard part is probably separating the good examples from the bad. You can learn to distinguish them by starting with the recognized masters. If you read code written by Dennis Ritchie or Ken Thompson, say, you can be sure you’re looking at a good example. After a while you learn to recognize the quality code even if the authors use widely different techniques and styles.
The same way I learned computer communications: by reading the code of those who had already mastered the language. Before you can do that, you have to know a little bit about the language, of course, so I generally try to find a short introduction to get me started. My introduction to Python is a good example. I started by reading the tutorial that I mentioned above. Spending a couple of hours with it gave me enough familiarity with the language that I could begin reading code. In the case of Python, much of the library is written in Python itself, so there is a built-in repository of first-rate examples. At the same time that I read the example code, I write small programs of my own, just to cement the concepts that I’m learning from the examples. After that, it’s just a matter of using the language regularly so that it becomes second nature.
See above. We have all experienced technical writing that is easy and pleasing to read, and some that is less so. If you want your writing to fall into the first category, study the style and techniques of those who write that way. Not everyone can write as well as Rich Stevens, Don Knuth, or Brian Kernighan, but we can study them and attempt to emulate their clarity. As with programming, it pays to read those who have mastered the craft, learn their techniques, and integrate them into our own work.
I learned a lot of the technical details, such as page layout techniques, from Rich Stevens. Most of that information is available to all from Rich’s Web pages on typesetting. Don Knuth’s The TeXbook has a wealth of information on the technical aspects of producing beautiful documents, whether or not you’re using TeX.
Last updated: $Date: 2006/06/13 16:54:45 $