The Linux 3Dfx HOWTO
Bernd Kreimeier (bk@gamers.org)
v1.16, 6 February 1998

This document describes 3Dfx graphics accelerator chip support for
Linux. It lists some supported hardware, describes how to configure
the drivers, and answers frequently asked questions.
______________________________________________________________________

Table of Contents

1. Introduction
   1.1 Contributors and Contacts
   1.2 Acknowledgments
   1.3 Revision History
   1.4 New versions of this document
   1.5 Feedback
   1.6 Distribution Policy

2. Graphics Accelerator Technology
   2.1 Basics
   2.2 Hardware configuration
   2.3 A bit of Voodoo Graphics (tm) architecture

3. Installation
   3.1 Installing the board
      3.1.1 Troubleshooting the hardware installation
      3.1.2 Configuring the kernel
      3.1.3 Configuring devices
   3.2 Setting up the Displays
      3.2.1 Single screen display solution
      3.2.2 Single screen dual cable setup
      3.2.3 Dual screen display solution
   3.3 Installing the Glide distribution
      3.3.1 Using the detect program
      3.3.2 Using the test programs

4. Answers To Frequently Asked Questions

5. FAQ: Requirements?
   5.1 What are the system requirements?
   5.2 Does it work with Linux-Alpha?
   5.3 Which 3Dfx chipsets are supported?
   5.4 Is the Voodoo Rush (tm) supported?
   5.5 Which boards are supported?
   5.6 How do boards differ?
   5.7 What about AGP?

6. FAQ: Voodoo Graphics (tm)? 3Dfx?
   6.1 Who is 3Dfx?
   6.2 Who is Quantum3D?
   6.3 What is the Voodoo Graphics (tm)?
   6.4 What is the Voodoo Rush (tm)?
   6.5 What is the Voodoo 2 (tm)?
   6.6 What is VGA pass-through?
   6.7 What is Texelfx or TMU?
   6.8 What is a Pixelfx unit?
   6.9 What is SLI mode?
   6.10 Is there a single board SLI setup?
   6.11 How much memory? How many buffers?
   6.12 Does the Voodoo Graphics (tm) do 24 or 32 bit color?
   6.13 Does the Voodoo Graphics (tm) store 24 or 32 bit z-buffer per pixel?
   6.14 What resolutions does the Voodoo Graphics (tm) support?
   6.15 What texture sizes are supported?
   6.16 Does the Voodoo Graphics (tm) support paletted textures?
   6.17 What about overclocking?
   6.18 Where could I get additional info on Voodoo Graphics (tm)?

7. FAQ: Glide? TexUS?
   7.1 What is Glide anyway?
   7.2 What is TexUS?
   7.3 Is Glide freeware?
   7.4 Where do I get Glide?
   7.5 Is the Glide source available?
   7.6 Is Linux Glide supported?
   7.7 Where could I post Glide questions?
   7.8 Where to send bug reports?
   7.9 Who is maintaining it?
   7.10 How can I contribute to Linux Glide?
   7.11 Do I have to use Glide?
   7.12 Should I program using the Glide API?
   7.13 What is the current Glide version?
   7.14 Does it support multiple Texelfx already?
   7.15 Is Linux Glide identical to DOS/Windows Glide?
   7.16 Where do I get information on Glide?
   7.17 Where to get some Glide demos?
   7.18 What is ATB?

8. FAQ: Glide and XFree86?
   8.1 Does it run with XFree86?
   8.2 Does it only run full screen?
   8.3 What is the problem with AT3D/Voodoo Rush (tm) boards?
   8.4 What about GLX for XFree86?
   8.5 Glide and commercial X Servers?
   8.6 Glide and SVGA?
   8.7 Glide and GGI?

9. FAQ: OpenGL/Mesa?
   9.1 What is OpenGL?
   9.2 Where to get additional information on OpenGL?
   9.3 Is Glide an OpenGL implementation?
   9.4 Is there an OpenGL driver from 3Dfx?
   9.5 Is there a commercial OpenGL for Linux and 3Dfx?
   9.6 What is Mesa?
   9.7 Does Mesa work with 3Dfx?
   9.8 How portable is Mesa with Glide?
   9.9 Where to get info on Mesa?
   9.10 Where to get information on Mesa Voodoo?
   9.11 Does Mesa support multitexturing?
   9.12 Does Mesa support single pass trilinear mipmapping?
   9.13 What is the Mesa "Window Hack"?
   9.14 How about GLUT?

10. FAQ: But Quake?
   10.1 What about that 3Dfx GL driver for Quake?
   10.2 Is there a 3Dfx based glQuake for Linux?
   10.3 Does glQuake run in an XFree86 window?
   10.4 Known Linux Quake problems?
   10.5 Known Linux Quake security problems?
   10.6 Does LinuxQuake use multitexturing?
   10.7 Where can I get current information on Linux glQuake?

11. FAQ: Troubleshooting?
   11.1 Has this hardware been tested?
   11.2 Failed to change I/O privilege?
   11.3 Does it work without root privilege?
   11.4 Displayed images look awful (single screen)?
   11.5 The last frame is still there (single or dual screen)?
   11.6 Powersave kicks in (dual screen)?
   11.7 My machine seems to lock (X11, single screen)?
   11.8 My machine locks (single or dual screen)?
   11.9 My machine locks (used with S3 VGA board)?
   11.10 No address conflict, but locks anyway?
   11.11 Mesa runs, but does not access the board?
   11.12 Resetting dual board SLI?
   11.13 Resetting single board SLI?

______________________________________________________________________

1. Introduction

This is the Linux 3Dfx HOWTO document. It is intended as a quick
reference covering everything you need to know to install and
configure 3Dfx support under Linux. Frequently asked questions
regarding 3Dfx support are answered, and references are given to some
other sources of information on a variety of topics related to
computer generated, hardware accelerated 3D graphics.

This information is only valid for Linux on the Intel platform. Some
information may be applicable to other processor architectures, but I
have no first hand experience or information on this. This document is
only applicable to boards based on 3Dfx technology; any other graphics
accelerator hardware is beyond the scope of this document.

1.1. Contributors and Contacts

This document would not have been possible without all the information
contributed by other people - those involved in the Linux Glide port
and the beta testing process, in the development of Mesa and the Mesa
Voodoo drivers, or reviewing the document on behalf of 3Dfx and
Quantum3D. Some of them contributed entire sections to this document.

Daryll Strauss daryll@harlot.rb.ca.us did the port, Paul J. Metzger
pjm@rbd.com modified the Mesa Voodoo driver (written by David
Bucciarelli tech.hmw@plus.it) for Linux, and Brian Paul
brianp@RA.AVID.COM integrated it with his famous Mesa library.
With respect to Voodoo Graphics (tm) accelerated Mesa, additional
thanks go to Henri Fousse, Gary McTaggart, and the maintainer of the
3Dfx Mesa for DOS, Charlie Wallace Charlie.Wallace@unistudios.com.

The folks at 3Dfx, notably Gary Sanders, Rod Hughes, and Marty Franz,
provided valuable input, as did Ross Q. Smith of Quantum3D. The pages
on the Voodoo Extreme and Operation 3Dfx websites provided useful info
as well, and in some cases I relied on the 3Dfx local newsgroups.

The Linux glQuake2 port that uses Linux Glide and Mesa is maintained
by Dave Kirsch zoid@idsoftware.com.

Thanks to all those who sent e-mail regarding corrections and updates,
and special thanks to Mark Atkinson for reminding me of the dual cable
setup.

Thanks to the SGML-Tools package (formerly known as Linuxdoc-SGML),
this HOWTO is available in several formats, all generated from a
common source file. For information on SGML-Tools see its homepage at
pobox.com/~cg/sgmltools.

1.2. Acknowledgments

3Dfx, the 3Dfx Interactive logo, Voodoo Graphics (tm), and Voodoo Rush
(tm) are registered trademarks of 3Dfx Interactive, Inc. Glide, TexUS,
Pixelfx and Texelfx are trademarks of 3Dfx Interactive, Inc. OpenGL is
a registered trademark of Silicon Graphics. Obsidian is a trademark of
Quantum3D. Other product names are trademarks of the respective
holders, and are hereby considered properly acknowledged.

1.3. Revision History

Version 1.03: First version for public release.

Version 1.16: Current version, 6 February 1998.

1.4. New versions of this document

You will find the most recent version of this document at
www.gamers.org/dEngine/xf3D/. New versions of this document will be
periodically posted to the comp.os.linux.answers newsgroup. They will
also be uploaded to various anonymous ftp sites that archive such
information, including ftp://sunsite.unc.edu/pub/Linux/docs/HOWTO/.
Hypertext versions of this and other Linux HOWTOs are available on
many World-Wide-Web sites, including sunsite.unc.edu/LDP/. Most Linux
CD-ROM distributions include the HOWTOs, often under the /usr/doc
directory, and you can also buy printed copies from several vendors.
If you make a translation of this document into another language, let
me know and I'll include a reference to it here.

1.5. Feedback

I rely on you, the reader, to make this HOWTO useful. If you have any
suggestions, corrections, or comments, please send them to me
(bk@gamers.org), and I will try to incorporate them in the next
revision. Please add "HOWTO 3Dfx" to the Subject line of the mail, so
procmail will dump it in the appropriate folder. Before sending bug
reports or questions, please read all of the information in this
HOWTO, and send detailed information about the problem.

If you publish this document on a CD-ROM or in hardcopy form, a
complimentary copy would be appreciated. Mail me for my postal
address. Also consider making a donation to the Linux Documentation
Project to help support free documentation for Linux. Contact the
Linux HOWTO coordinator, Tim Bynum (linux-howto@sunsite.unc.edu), for
more information.

1.6. Distribution Policy

Copyright (c) 1997, 1998 by Bernd Kreimeier. This document may be
distributed under the terms set forth in the LDP license at
sunsite.unc.edu/LDP/COPYRIGHT.html. This HOWTO is free documentation;
you can redistribute it and/or modify it under the terms of the LDP
license. This document is distributed in the hope that it will be
useful, but without any warranty; without even the implied warranty of
merchantability or fitness for a particular purpose. See the LDP
license for more details.

2. Graphics Accelerator Technology

2.1. Basics

This section gives a very cursory overview of computer graphics
accelerator technology, in order to help you understand the concepts
used later in the document. You should consult e.g. a book on OpenGL
in order to learn more.
2.2. Hardware configuration

Graphics accelerators come in different flavors: either as a separate
PCI board that is able to pass through the video signal of a (possibly
2D or video accelerated) VGA board, or as a PCI board that does both
VGA and 3D graphics (effectively replacing older VGA controllers). The
3Dfx boards based on the Voodoo Graphics (tm) belong to the former
category. We will get into this again later.

If there is no address conflict, any 3D accelerator board could be
present under Linux without interfering, but in order to access the
accelerator, you will need a driver. A combined 2D/3D accelerator
might behave differently.

2.3. A bit of Voodoo Graphics (tm) architecture

Usually, accessing texture memory and the frame/depth buffer is a
major bottleneck. For each pixel on the screen, there are at least one
(nearest), four (bi-linear), or eight (tri-linear mipmapped) read
accesses to texture memory, plus a read/write to the depth buffer, and
a read/write to frame buffer memory.

The Voodoo Graphics (tm) architecture separates texture memory from
frame/depth buffer memory by introducing two separate rendering
stages, with two corresponding units (Pixelfx and Texelfx), each
having a separate memory interface to dedicated memory. This gives an
above-average fill rate, paid for with restrictions in memory
management (e.g. unused framebuffer memory can not be used for texture
caching). Moreover, a Voodoo Graphics (tm) could use two TMUs (texture
management or Texelfx units), and finally, two Voodoo Graphics (tm)
could be combined with a mechanism called Scan-Line Interleaving
(SLI). SLI essentially means that each Pixelfx unit effectively
provides only every other scanline, which decreases the bandwidth
impact on each Pixelfx's framebuffer memory.

3. Installation

Configuring Linux to support 3Dfx accelerators involves the following
steps:

1. Installing the board.

2. Installing the Glide distribution.

3. Compiling, linking and/or running the application.
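All of the above presumes a kernel with PCI support enabled (see
section 3.1.2 below), in which case the bus listing shows up as
/proc/pci. A minimal pre-flight check can be sketched as follows; the
path argument only exists so the check can be pointed at any file for
testing, and is not part of the Glide distribution:

```shell
# Pre-flight check: the boards discussed here are memory mapped PCI
# devices, so the kernel must expose the PCI bus as /proc/pci.
pci_supported() {
    [ -r "${1:-/proc/pci}" ]
}

if pci_supported; then
    echo "kernel PCI support: ok"
else
    echo "no /proc/pci - enable PCI support and rebuild the kernel"
fi
```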
The next sections will cover each of these steps in detail.

3.1. Installing the board

Follow the manufacturer's instructions for installing the hardware, or
have your dealer perform the installation. It should not be necessary
to select settings for IRQ or DMA channel; either Plug&Pray (tm) or
factory defaults should work. The add-on boards described here are
memory mapped devices and do not use IRQs. The only kind of conflict
to avoid is memory overlap with other devices. As 3Dfx does not
develop or sell any boards, do not contact them about any problems.

3.1.1. Troubleshooting the hardware installation

To check the installation and the memory mapping, do cat /proc/pci.
The output should contain something like

______________________________________________________________________
  Bus  0, device  12, function  0:
    VGA compatible controller: S3 Inc. Vision 968 (rev 0).
      Medium devsel.  IRQ 11.
      Non-prefetchable 32 bit memory at 0xf4000000.
  Bus  0, device   9, function  0:
    Multimedia video controller: Unknown vendor Unknown device (rev 2).
      Vendor id=121a. Device id=1.
      Fast devsel.  Fast back-to-back capable.
      Prefetchable 32 bit memory at 0xfb000000.
______________________________________________________________________

for a Diamond Monster 3D used with a Diamond Stealth-64. Additionally,
a cat /proc/cpuinfo /proc/meminfo might be helpful for tracking down
conflicts and/or submitting a bug report.

With current kernels, you will probably get a boot warning like

______________________________________________________________________
  Jun 12 12:31:52 hal kernel: Warning : Unknown PCI device (121a:1).
  Please read include/linux/pci.h
______________________________________________________________________

which can be safely ignored. If you happen to have an uncommon board,
or have encountered a new revision, you should take the time to follow
the advice in /usr/include/linux/pci.h and send all necessary
information to linux-pcisupport@cao-vlsi.ibp.fr.
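The check for the board can be scripted: as in the listing above, a
Voodoo Graphics (tm) shows up with vendor id 121a. A small sketch (the
function takes the listing as a file argument, so it can also be run
against a saved copy attached to a bug report):

```shell
# Look for the 3Dfx vendor id (121a) in a /proc/pci style listing.
# Pass a file name, e.g. /proc/pci, or a saved copy of it.
find_voodoo() {
    if grep -qi 'Vendor id=121a' "$1"; then
        echo "3Dfx board found"
    else
        echo "no 3Dfx board found"
    fi
}

find_voodoo /proc/pci
```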
If you experience any problems with the board, you should try to
verify that DOS and/or Win95 or NT support works. You will probably
not receive any useful response from a board manufacturer on a bug
report or request regarding Linux. Having dealt with the Diamond
support e-mail system, I would not expect useful responses for other
operating systems either.

3.1.2. Configuring the kernel

There is no kernel configuration necessary, as long as PCI support is
enabled. The Linux Kernel HOWTO should be consulted for the details of
building a kernel.

3.1.3. Configuring devices

The current drivers do not (yet) require any special devices. This is
different from other driver developments (e.g. the sound drivers,
where you will find a /dev/dsp and /dev/audio). The driver uses the
/dev/mem device, which should always be available. In consequence, you
need setuid or root privileges to access the accelerator board.

3.2. Setting up the Displays

There are two possible setups with add-on boards. You could either
pass through the video signal from your regular VGA board via the
accelerator board to the display, or you could use two displays at the
same time. Refer to the manual provided by the board manufacturer for
details. Both configurations have been tried with the Monster 3D
board.

3.2.1. Single screen display solution

This configuration allows you to check basic operation of the
accelerator board - if the video signal is not transmitted to the
display, hardware failure is possible. Beware that the video output
signal might deteriorate significantly if passed through the video
board. To a degree, this is inevitable. However, reviews have
complained about the below-average quality of the cables provided e.g.
with the Monster 3D, and judging from the one I tested, this has not
changed. There are other pitfalls in single screen configurations.
Switching from the VGA display mode to the accelerated display mode
will change resolution and refresh rate as well, even if you are
already using 640x480 e.g. with X11. Moreover, if you are running X11,
your application is responsible for grabbing all keyboard and mouse
events, or you might get stuck because of changed scope and exposure
on the X11 display (which is effectively invisible while the
accelerated mode is used). You could use SVGA console mode instead of
X11. If you are going to use a single screen configuration and switch
modes often, remember that your monitor hardware might not enjoy this
kind of use.

3.2.2. Single screen dual cable setup

Some high end monitors (e.g. the EIZO F-784-T) come with two
connectors, one with 5 BNC connectors for RGB, HSync, VSync, the other
e.g. a regular VGA or a 13W3 Sub-D VGA. These displays usually also
feature a front panel input selector to safely switch from one to the
other. It is thus possible to use e.g. a VGA-to-BNC cable with your
high end 2D card, and a VGA-to-13W3 Sub-D cable with your 3Dfx, and
effectively run dual screen on one display.

3.2.3. Dual screen display solution

The accelerator board does not need the VGA input signal. Instead of
routing the common video output through the accelerator board, you
could attach a second monitor to its output, and use both at the same
time. This solution is more expensive, but gives the best results, as
your main display will still be hires and without the signal quality
losses involved in a pass-through solution. In addition, you could use
X11 and the accelerated full screen display in parallel, for
development and debugging.

A common problem is that the accelerator board will not provide any
video signal when not used. In consequence, each time the graphics
application terminates, the hardware screensave/powersave might kick
in, depending on your monitor's configuration. Again, your hardware
might not enjoy being treated like this.
You should use

______________________________________________________________________
  setenv SST_DUALSCREEN 1
______________________________________________________________________

to force continued video output in this setup.

3.3. Installing the Glide distribution

The Glide driver and library are provided as a single compressed
archive. Use tar and gzip to unpack, and follow the instructions in
the README and INSTALL accompanying the distribution. Read the install
script and run it. Installation puts everything in
/usr/local/glide/{include,lib,bin} and adds that library directory to
ld.so.conf. Installing the files and updating ld.so.conf are
independent actions; if you skip the ld.so.conf step, you need to set
LD_LIBRARY_PATH instead. You will need to install the header files in
a location available at compile time, if you want to compile your own
graphics applications. If you do not want to use the installation as
above (i.e. you insist on a different location), make sure that any
application can access the shared library at runtime, or you will get
a response like can't load library 'libglide.so'.

3.3.1. Using the detect program

There is a bin/detect program in the distribution (the source is not
available). You have to run it as root, and you will get something
like

______________________________________________________________________
  slot vendorId devId  baseAddr0  command description
  ---- -------- ------ ---------- ------- -----------
  00   0x8086   0x122d 0x00000000 0x0006  Intel:430FX (Triton)
  07   0x8086   0x122e 0x00000000 0x0007  Intel:ISA bridge
  09   0x121a   0x0001 0xfb000008 0x0002  3Dfx:video multimedia adapter
  10   0x1000   0x0001 0x0000e401 0x0007  ???:SCSI bus controller
  11   0x9004   0x8178 0x0000e001 0x0017  Adaptec:SCSI bus controller
  12   0x5333   0x88f0 0xf4000000 0x0083  S3:VGA-compatible display co
______________________________________________________________________

as a result.
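The line of interest is the one with vendorId 0x121a. A sketch that
pulls it out of a saved detect listing (the file name detect.log and
the embedded sample rows, taken from the table above, are just
examples):

```shell
# Saved output of the detect program (sample rows from this HOWTO);
# in practice: detect > detect.log   (run as root)
cat > detect.log <<'EOF'
00 0x8086 0x122d 0x00000000 0x0006 Intel:430FX (Triton)
09 0x121a 0x0001 0xfb000008 0x0002 3Dfx:video multimedia adapter
EOF

# The row of interest is the one with vendorId 0x121a:
awk '$2 == "0x121a" { print "3Dfx board in slot " $1 }' detect.log
```

This prints "3Dfx board in slot 09" for the sample listing.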
If you do not have root privileges, the program will bail out with

______________________________________________________________________
  Permission denied: Failed to change I/O privilege. Are you root?
______________________________________________________________________

The output might come in handy for a bug report as well.

3.3.2. Using the test programs

Within the Glide distribution, you will find a folder with test
programs. Note that these test programs are under 3Dfx copyright, and
are legally available for use only if you have purchased a board with
a 3Dfx chipset. See the LICENSE file in the distribution, or their web
site www.3dfx.com for details.

It is recommended to compile and link the test programs even if there
happen to be binaries in the distribution. Note that some of the
programs will require some files like alpha.3df from the distribution
to be available in the same folder. All test programs use the 640x480
screen resolution. Some will request a variety of single character
inputs, others will just state Press A Key To Begin Test. Beware of
loss of input scope if running X11 on the same screen at the same
time. See the README.test for a list of programs, and other details.

4. Answers To Frequently Asked Questions

The following section answers some of the questions that (will) have
been asked on the Usenet news groups and mailing lists. The FAQ has
been subdivided into several parts for convenience, namely

o  FAQ: Requirements?

o  FAQ: Voodoo Graphics (tm)? 3Dfx?

o  FAQ: Glide? TexUS?

o  FAQ: Glide and XFree86?

o  FAQ: OpenGL/Mesa?

o  FAQ: But Quake?

o  FAQ: Troubleshooting?

Each section lists several questions and answers, which will hopefully
address most problems.

5. FAQ: Requirements?

5.1. What are the system requirements?

A Linux PC, PCI 2.1 compliant, a monitor capable of 640x480, and a 3D
accelerator board based on the 3Dfx Voodoo Graphics (tm). It will work
on a P5 or P6, with or without MMX.
The current version does not use MMX, but it has some optimized code
paths for P6. At one point, some 3Dfx statements seemed to imply that
using Linux Glide required a RedHat distribution. Note that while
Linux Glide was originally ported in a RedHat 4.1 environment, it has
been used and tested with many other Linux distributions, including
homebrew, Slackware, and Debian 1.3.1.

5.2. Does it work with Linux-Alpha?

There is currently no Linux Glide distribution available for any
platform besides i586. As the Glide sources are not available for
distribution, you will have to wait for the binary. Quantum3D has DEC
Alpha support announced for 2H97. Please contact Daryll Strauss if you
are interested in supporting this.

There is also the issue of porting the assembly modules. While there
are alternative C paths in the code, the assembly module in Glide
(essentially triangle setup) offered significant performance gains
depending on the P5 CPU used.

5.3. Which 3Dfx chipsets are supported?

Currently, the 3Dfx Voodoo Graphics (tm) chipset is supported under
Linux. The Voodoo Rush (tm) chipset is not yet supported.

5.4. Is the Voodoo Rush (tm) supported?

The current port of Glide to Linux does not support the Voodoo Rush
(tm). An update is in the works. The problem is that at one point the
Voodoo Rush (tm) driver code in Glide depended on Direct Draw. There
was an SST96 based DOS portion in the library that could theoretically
be used for Linux, as soon as all portions residing in the 2D/Direct
Draw/D3D combo driver are replaced. Thus Voodoo Rush (tm) based boards
like the Hercules Stingray 128/3D or Intergraph Intense Rush are not
supported yet.

5.5. Which boards are supported?

There are no officially supported boards, as 3Dfx does not sell any
boards. This section does not attempt to list all boards; it will just
give an overview, and will list only boards that have been found to
cause trouble.
It is important to recognize that Linux support for a given board does
not only require a driver for the 3D accelerator component. If a board
features its own VGA core as well, support by either Linux SVGA or
XFree86 is required, too (see the section about the Voodoo Rush (tm)
chipset). Currently, an add-on solution is recommended, as it allows
you to choose a regular graphics board well supported under Linux.
There are other aspects discussed below.

All Quantum3D Obsidian boards, independent of texture memory, frame
buffer memory, number of Pixelfx and Texelfx units, and SLI, should
work. The same goes for all other Voodoo Graphics (tm) based boards,
like the Orchid Righteous 3D, Canopus Pure 3D, Flash 3D, and Diamond
Monster 3D. Voodoo Rush (tm) based boards are not yet supported.
Boards that are not based on 3Dfx chipsets (e.g. manufactured by S3,
Matrox, 3Dlabs, Videologic) do not work with the 3Dfx drivers and are
beyond the scope of this document.

5.6. How do boards differ?

As the board manufacturers are using the same chipset, any differences
are due to board design. Examples are the quality of the pass-through
cable and connectors (reportedly, Orchid provided better quality than
Diamond), availability of a TV-compliant video signal output (Canopus
Pure 3D), and, most notably, memory size on board. Most common were
boards for games with 2 MB texture cache and 2 MB framebuffer memory;
however, the Canopus Pure 3D comes with a maximum of 4 MB texture
cache, which is an advantage e.g. with games using dynamically changed
textures and/or illumination textures (Quake, most notably). The
memory architecture of a typical Voodoo Graphics (tm) board is
described below, in a separate section.

Quantum 3D offers the widest selection of 3Dfx-based boards, and is
probably the place to go if you are looking for a high end Voodoo
Graphics (tm) based board configuration.
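Since boards differ mainly in these memory amounts, it helps to total
them up. A small sketch following the SLI/Pixelfx/Texelfx1/Texelfx2
scheme described below in section 6.11 (the function name is made up
for this example):

```shell
# Total on-board memory in MB, following the scheme of section 6.11:
#   total = SLI * (framebuffer + texture1 + texture2)
board_memory_mb() {
    echo $(( $1 * ($2 + $3 + $4) ))
}

board_memory_mb 1 2 2 0   # common 2 MB + 2 MB gaming board: 4
board_memory_mb 1 2 4 0   # Canopus Pure 3D: 6
board_memory_mb 2 4 4 4   # fully featured dual board SLI setup: 24
```

These totals match the 4 MB, 6 MB, and 24 MB figures given in section
6.11.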
Quantum 3D is addressing the visual simulation market, while most of
the other vendors are only targeting the consumer-level PC-game
market.

5.7. What about AGP?

There is no Voodoo Graphics (tm) or Voodoo Rush (tm) AGP board that I
am aware of. I am not aware of AGP support under Linux, and I do not
know whether upcoming AGP boards using 3Dfx technology might possibly
be supported with Linux.

6. FAQ: Voodoo Graphics (tm)? 3Dfx?

6.1. Who is 3Dfx?

3Dfx is a San Jose based manufacturer of 3D graphics accelerator
hardware for arcade games, game consoles, and PC boards. Their
official website is www.3dfx.com. 3Dfx does not sell any boards, but
other companies do, e.g. Quantum3D.

6.2. Who is Quantum3D?

Quantum3D started as a 3Dfx spin-off, manufacturing high end
accelerator boards based on 3Dfx chip technology for the consumer and
business markets, and supplying arcade game technology. See their home
page at www.quantum3d.com for additional information. For general
inquiries regarding Quantum3D, please send mail to info@quantum3d.

6.3. What is the Voodoo Graphics (tm)?

The Voodoo Graphics (tm) is a chipset manufactured by 3Dfx. It is used
in hardware acceleration boards for the PC. See the HOWTO section on
supported hardware.

6.4. What is the Voodoo Rush (tm)?

The Voodoo Rush (tm) is a derivative of the Voodoo Graphics (tm) that
has an interface to cooperate with a 2D VGA video accelerator,
effectively supporting accelerated graphics in windows. This combo is
currently not supported with Linux.

6.5. What is the Voodoo 2 (tm)?

The Voodoo 2 (tm) is the successor of the Voodoo Graphics (tm)
chipset, featuring several improvements. It is announced for late
March 1998, and announcements of Voodoo 2 (tm) based boards have been
published e.g. by Quantum 3D, Creative Labs, Orchid Technologies, and
Diamond Multimedia. The Voodoo 2 (tm) is supposed to be backwards
compatible. However, a new version of Glide will have to be ported to
Linux.

6.6. What is VGA pass-through?
The Voodoo Graphics (tm) (but not the Voodoo Rush (tm)) boards are
add-on boards, meant to be used with a regular 2D VGA video
accelerator board. In short, the video output of your regular VGA
board is used as input for the Voodoo Graphics (tm) based add-on
board, which by default passes it through to the display, which is in
turn connected to the Voodoo Graphics (tm) board. If the Voodoo
Graphics (tm) is used (e.g. by a game), it will disconnect the VGA
input signal, switch the display to a 640x480 fullscreen mode with the
refresh rate configured by SST variables and the application/driver,
and generate the video signal itself. The VGA doesn't need to be aware
of this, and won't be.

This setup has several advantages: a free choice of 2D VGA board,
which is an issue with Linux, as XFree86 drivers aren't available for
all chipsets and revisions, and a cost effective migration path to
accelerated 3D graphics. It also has several disadvantages: an
application using the Voodoo Graphics (tm) might not re-enable video
output when crashing, and the regular VGA video signal deteriorates in
the pass-through process.

6.7. What is Texelfx or TMU?

Voodoo Graphics (tm) chipsets have two units. The first one interfaces
the texture memory on the board, does the texture mapping, and
ultimately generates the input for the second unit that interfaces the
framebuffer. This one is called Texelfx, aka Texture Management Unit,
aka TMU.

The neat thing about this is that a board can use two Texelfx instead
of only one, like some of the Quantum3D Obsidian boards did,
effectively doubling the processing power in some cases, depending on
the application. As each Texelfx can address 4 MB texture memory, a
dual Texelfx setup has an effective texture cache of up to 8 MB. This
can be true even if only one Texelfx is actually needed by a
particular application, as textures can be distributed to both
Texelfx, which are used depending on the requested texture.
Both Texelfx are used together to perform certain operations such as
trilinear filtering and illumination texture/lightmap passes (e.g. in
glQuake) in a single pass instead of the two passes that are required
with only one Texelfx. To actually exploit the theoretically available
speedup and cache size increase, a Glide application has to use both
Texelfx properly. The two Texelfx can not be used separately to each
draw a textured triangle at the same time. A triangle is always drawn
using whatever the current setup is, which can be to use both Texelfx
for a single pass operation combining two textures, or one Texelfx for
only a single texture. Each Texelfx can only access its own memory.

6.8. What is a Pixelfx unit?

Voodoo Graphics (tm) chipsets have two units. The second one
interfaces the framebuffer and ultimately generates the depth buffer
and pixel color updates. This one is called Pixelfx.

The neat thing here is that two Pixelfx units can cooperate in SLI
mode, like with some of the Quantum3D Obsidian boards, effectively
doubling the frame rate.

6.9. What is SLI mode?

SLI means "Scanline Interleave". In this mode, two Pixelfx are
connected and render in alternate turns, one handling odd, the other
handling even scanlines of the actual output. In this mode, each
Pixelfx stores only half of the image and half of the depth buffer
data in its own local framebuffer, effectively doubling the number of
pixels.

The Pixelfx in question can be on the same board, or on two boards
properly connected. Some Quantum3D Obsidian boards support SLI with
Voodoo Graphics (tm). As two cards can decode the same PCI addresses
and receive the same data, there is not necessarily additional bus
bandwidth required by SLI. On the other hand, texture data will have
to be replicated on both boards, thus the amount of texture memory
effectively stays the same.

6.10. Is there a single board SLI setup?

There are now two types of Quantum3D SLI boards.
The initial setup used two boards, two PCI slots, and an interconnect
(e.g. the Obsidian 100-4440). The later revision, which performs
identically, is contained on one full-length PCI board (e.g. the
Obsidian 100-4440SB). Thus a single board SLI solution is possible,
and has been done.

6.11. How much memory? How many buffers?

The most essential difference between boards using the Voodoo Graphics
(tm) chipset is the amount and organization of memory. Quantum3D used
a three digit scheme to describe boards. Here is a slightly modified
one (anticipating Voodoo 2 (tm)). Note that if you use more than one
Texelfx, they need the same amount of texture cache memory each, and
if you combine two Pixelfx, each needs the same amount of frame buffer
memory.

______________________________________________________________________
  "SLI / Pixelfx / Texelfx1 / Texelfx2 "
______________________________________________________________________

It means that a common 2MB+2MB board would be a 1/2/2/0 solution, with
the minimally required total of 4 MB of memory. A Canopus Pure 3D
would be 1/2/4/0, or 6 MB. An Obsidian-2220 board with two Texelfx
would be 1/2/2/2, and an Obsidian SLI-2440 board would be 2/2/4/4. A
fully featured dual board solution (2 Pixelfx, each with 2 Texelfx and
4 MB frame buffer, each Texelfx with 4 MB texture cache) would be
2/4/4/4, and the total amount of memory would be
SLI*(Pixelfx+Texelfx1+Texelfx2), or 24 MB. So there.

6.12. Does the Voodoo Graphics (tm) do 24 or 32 bit color?

No. The Voodoo Graphics (tm) architecture uses 16bpp internally. This
is true for Voodoo Graphics (tm), Voodoo Rush (tm) and Voodoo 2 (tm)
alike. Quantum3D claims to implement 22-bpp effective color depth with
an enhanced 16-bpp frame buffer, though.

6.13. Does the Voodoo Graphics (tm) store 24 or 32 bit z-buffer per
pixel?

No. The Voodoo Graphics (tm) architecture uses 16bpp internally for
the depth buffer, too.
This again is true for Voodoo Graphics (tm), Voodoo Rush (tm) and Voodoo 2 (tm) alike. Again, Quantum3D claims that using the floating point 16-bits per pixel (bpp) depth buffering provides 22-bpp effective Z-buffer precision.

6.14. What resolutions does the Voodoo Graphics (tm) support?

The Voodoo Graphics (tm) chipset supports up to 4 MB frame buffer memory. Presuming double buffering and a depth buffer, a 2 MB framebuffer will support a resolution of 640x480. With a 4 MB frame buffer, 800x600 is possible. Unfortunately 960x720 is not supported. The Voodoo Graphics (tm) chipset requires that the memory footprint for a particular resolution be computed with the vertical and horizontal resolutions rounded up to be evenly divisible by 32. The video refresh controller, though, can output any particular resolution; it is the "virtual" size required for the memory footprint whose dimensions must be evenly divisible by 32. So 960x720 actually requires the memory of 960x736, and 960x736x2x3 = 4.04 MBytes. However, using two boards with SLI, or a dual Pixelfx SLI board, means that each framebuffer only has to store half of the image. Thus 2 times 4 MB in SLI mode are good up to 1024x768, which is the maximum because of the overall hardware design. You will be able to do 1024x768 triple buffered with Z, but you will not be able to do e.g. 1280x960 with double buffering. Note that triple buffering (no VSync synchronization required by the application), stereo buffering (for interfacing LCD shutters) and other more demanding setups will severely decrease the available resolution.

6.15. What texture sizes are supported?

The maximum texture size for the Voodoo Graphics (tm) chipset is 256x256, and you have to use powers of two. Note that for really small textures (e.g. 16x16) you are better off merging them into a large texture, and adjusting your effective texture coordinates appropriately.

6.16. Does the Voodoo Graphics (tm) support paletted textures?
The Voodoo Graphics (tm) hardware and Glide support the palette extension to OpenGL. The most recent version of Mesa does support the GL_EXT_paletted_texture and GL_EXT_shared_texture_palette extensions.

6.17. What about overclocking?

If you want to put aside considerations about warranty and overheating, and want to do overclocking to boost performance even further, there is related info out on the web. The basic mechanism is to use Glide environment variables to adjust the clock. Note that the actual recommended clock is board dependent. While the default clock speed is 50 MHz, the Diamond Monster 3D property sheet lets you set up a clock of 57 MHz. It all comes down to the design of a specific board, and which components are used with the Voodoo Graphics (tm) chipset - most notably the access speed of the RAM in question. If you exceed the limits of your hardware, rendering artifacts will occur, to say the least. Reportedly, 57 MHz usually works, while 60 MHz or more is already pushing it. Increasing the clock frequency also means increasing the heat dissipated in the chips, in a nonlinear dependency (a 10% increase in frequency means a much larger increase in heating). In consequence, for permanent overclocking you might want to educate yourself about ways to add cooling fans to the board in a way that does not affect warranty. A highly recommended source is the "3Dfx Voodoo Heat Report" by Eric van Ballegoie, available on the web.

6.18. Where could I get additional info on Voodoo Graphics (tm)?

There is a FAQ by 3Dfx, which should be available at their web site. You will find retail information at the following locations: www.3dfx.com and www.quantum3d.com. Unofficial sites that have good info are "Voodoo Extreme" at www.ve3d.com, and "Operation 3Dfx".

7. FAQ: Glide? TexUS?

7.1. What is Glide anyway?

Glide is a proprietary API plus drivers to access 3D graphics accelerator hardware based on chipsets manufactured by 3Dfx.
Glide has been developed and implemented for DOS, Windows, and Macintosh, and has been ported to Linux by Daryll Strauss.

7.2. What is TexUS?

In the distribution is a libtexus.so, the 3Dfx Interactive Texture Utility Software. It is an image processing library and utility program for preparing images for use with the 3Dfx Interactive Glide library. Features of TexUS include file format conversion, MIPmap creation, and support for 3Dfx Interactive Narrow Channel Compression textures. The TexUS utility program texus reads images in several popular formats (TGA, PPM, RGT), generates MIPmaps, and writes the images as 3Dfx Interactive texture files (see e.g. alpha.3df, as found in the distribution) or as an image file for inspection. For details on the parameters for texus, and the API, see the TexUS documentation.

7.3. Is Glide freeware?

Nope. Glide is neither GPL'ed nor subject to any other public license. See LICENSE in the distribution for details. Effectively, by downloading and using it, you agree to the End User License Agreement (EULA) on the 3Dfx web site. Glide is provided as binary only, and you should neither use nor distribute any files but the ones released to the public, if you have not signed an NDA. The Glide distribution including the test program sources is copyrighted by 3Dfx. The same is true for all the sources in the Glide distribution. In the words of 3Dfx: These are not public domain, but they can be freely distributed to owners of 3Dfx products only. No card, no code!

7.4. Where do I get Glide?

The entire 3Dfx SDK is available for download off their public web site at www.3dfx.com/software/download_glide.html. Anything else publicly released by 3Dfx is nearby on their website, too. There is also an FTP site, ftp.3dfx.com. The FTP site has a longer timeout, and some of the larger files have been broken into 3 files (approx. 3MB each).

7.5. Is the Glide source available?

Nope.
The Glide source is made available only based on a special agreement and NDA with 3Dfx.

7.6. Is Linux Glide supported?

Currently, Linux Glide is unsupported. Basically, it is provided under the same disclaimers as the 3Dfx GL DLL (see below). However, 3Dfx definitely wants to provide as much support as possible, and is in the process of setting up some prerequisites. For the time being, you will have to rely on the 3Dfx newsgroup (see below). In addition, the Quantum3D web page claims that Linux support (for Obsidian) is planned for both Intel and AXP architecture systems in 2H97.

7.7. Where could I post Glide questions?

There are newsgroups currently available only on the NNTP server news.3dfx.com run by 3Dfx. These USENET groups are dedicated to 3Dfx and Glide in general, and will mainly provide assistance for DOS, Win95, and NT. The current list includes:

______________________________________________________________________
3dfx.events
3dfx.games.glquake
3dfx.glide
3dfx.glide.linux
3dfx.products
3dfx.test
______________________________________________________________________

and the 3dfx.oem.products.* groups for specific boards, e.g. 3dfx.oem.products.quantum3d.obsidian. Please use news.3dfx.com/3dfx.glide.linux for all Linux Glide related questions. A mailing list dedicated to Linux Glide is in preparation for 1Q98. Send mail to majordomo@gamers.org, with no subject and a message body of "info linux-3dfx", to get information about the posting guidelines, the hypermail archive and how to subscribe to the list or the digest.

7.8. Where to send bug reports?

Currently, you should rely on the newsgroup (see above), that is news.3dfx.com/3dfx.glide.linux. There is no official support e-mail set up yet. For questions not specific to Linux Glide, make sure to use the other newsgroups.

7.9. Who is maintaining it?

3Dfx will appoint an official maintainer soon. Currently, the unofficial maintainer of the Linux Glide port is Daryll Strauss.
Please post bug reports in the newsgroup (above). If you are confident that you found a bug not previously reported, please mail Daryll at daryll@harlot.rb.ca.us

7.10. How can I contribute to Linux Glide?

You could submit precise bug reports. Providing sample programs to be included in the distribution is another possibility. A major contribution would be adding code to the Glide based Mesa Voodoo driver source. See the section on Mesa Voodoo below.

7.11. Do I have to use Glide?

Yes. As of now, there is no other Voodoo Graphics (tm) driver available for Linux. At the lowest level, Glide is the only interface that talks directly to the hardware. However, you can write OpenGL code without knowing anything about Glide, and use Mesa with the Glide based Mesa Voodoo driver. It helps to be aware of the involvement of Glide for recognizing driver limitations and bugs, though.

7.12. Should I program using the Glide API?

That depends on the application you are heading for. Glide is a proprietary API that is partly similar to OpenGL or Mesa, partly contains features only available as EXTensions to some OpenGL implementations, and partly contains features not available anywhere but within Glide. If you want to use the OpenGL API, you will need Mesa (see below). Mesa, namely the Mesa Voodoo driver, offers an API resembling the well documented and widely used OpenGL API. However, the Mesa Voodoo driver is in early alpha, and you will have to accept performance losses and lack of support for some features. In summary, the decision is up to you - if you are heading for maximum performance while accepting potential problems with porting to non-3Dfx hardware, Glide is not a bad choice. If you care about maintenance, OpenGL might be the best bet in the long run.

7.13. What is the Glide current version?

The current version of Linux Glide is 2.4. The next version will probably be identical to the current version for DOS/Windows, which is 2.4.3, and comes in two distributions.
Right now, various parts of Glide are different for Voodoo Rush (tm) (VR) and Voodoo Graphics (tm) (VG) boards. Thus you have to pick up separate distributions (under Windows) for VR and VG. The same will be true for Linux. There will possibly be another chunk of code and another distribution for Voodoo 2 (tm) (V2) boards. There is also a Glide 3.0 in preparation that will extend the API for use of triangle fans and triangle strips, and provide better state change optimization. Support for fans and strips will in some situations significantly reduce the amount of data sent per triangle, and the Mesa driver will benefit from this, as the OpenGL API has separate modes for these. For a detailed explanation see e.g. the OpenGL documentation.

7.14. Does it support multiple Texelfx already?

Multiple Texelfx/TMU's can already be used for single pass trilinear mipmapping in current Linux Glide, for improved image quality without performance penalty. You will need a board with two Texelfx (that is, one of the appropriate Quantum3D Obsidian boards). The application needs to specify the use of both Texelfx accordingly; it does not happen automatically. Note that because most applications are implemented for consumer boards with a single Texelfx, they might not query the presence of a second Texelfx, and thus not use it. This is not a flaw of Glide but of the application.

7.15. Is Linux Glide identical to DOS/Windows Glide?

The publicly available version of Linux Glide should be identical to the respective DOS/Windows versions. Delays in releasing the Linux port of newer DOS/Windows releases are possible.

7.16. Where do I get information on Glide?

There is exhaustive information available from 3Dfx. You can download it from their home page at www.3dfx.com/software/download_glide.html. These are free, presuming you bought a 3Dfx hardware based board. Please read the licensing regulations.
Basically, you should look for some of the following:

o Glide Release Notes
o Glide Programming Guide
o Glide Reference Manual
o Glide Porting Guide
o TexUs Texture Utility Software
o ATB Release Notes
o Installing and Using the Obsidian

These are available as Microsoft Word documents, and are part of the Windows Glide distribution, i.e. the self-extracting archive file. PostScript copies for separate download should be available at www.3dfx.com as well. Note that the release numbers are not always in sync with those of Glide.

7.17. Where to get some Glide demos?

You will find demo sources for Glide within the distribution (test programs), and on the 3Dfx home page. The problem with the latter is that some require ATB. To port these demos to Linux, the event handling has to be completely rewritten. In addition, you might find some of the OpenGL demo sources accompanying Mesa and GLUT useful. While the Glide API is different from the OpenGL API, they target the same hardware rendering pipeline.

7.18. What is ATB?

Some of the 3Dfx demo programs for Glide depend not only on Glide but also on 3Dfx's proprietary Arcade Toolbox (ATB), which is available for DOS and Win32, but has not been ported to Linux. If you are a developer, the sources are available within the Total Immersion program, so porting ATB to Linux would be possible.

8. FAQ: Glide and XFree86?

8.1. Does it run with XFree86?

Basically, the Voodoo Graphics (tm) hardware does not care about X. The X server will not even notice that the video signal generated by the VGA hardware does not reach the display in single screen configurations. If your application is not written to be X aware, Glide switching to full screen mode might cause problems (see the troubleshooting section). If you do not want the overhead of writing an X11-aware application, you might want to use SVGA console mode instead. So yes, it does run with XFree86, but no, it is not cooperating if you don't write your application accordingly.
You can use the Mesa "window hack", which will be significantly slower than fullscreen, but still a lot faster than software rendering (see the section below).

8.2. Does it only run full screen?

See above. The Voodoo Graphics (tm) hardware is not window environment aware, and neither is Linux Glide. Again, the experimental Mesa "window hack" covered below will allow for pasting the Voodoo Graphics (tm) board framebuffer's content into an X11 window.

8.3. What is the problem with AT3D/Voodoo Rush (tm) boards?

There is an inherent problem when using Voodoo Rush (tm) boards with Linux: basically, these boards are meant to be VGA 2D/3D accelerator boards, either as a single board solution, or with a Voodoo Rush (tm) based daughterboard used transparently. The VGA component tied to the Voodoo Rush (tm) is an Alliance Semiconductor ProMotion-AT3D multimedia accelerator. To use this e.g. with XFree86 at all, you need a driver for the AT3D chipset. There is a mailing list on this, and a web site with FAQ at www.frozenwave.com/linux-stingray128. Look there for the most current info. There is a SuSE maintained driver at ftp.suse.com/suse_update/special/xat3d.tgz. Reportedly, the XFree86 SVGA server also works, supporting 8, 16 and 32 bpp. Official support will probably be in XFree86 4.0. XFree86 decided to prepare an intermediate XFree86 3.3.2 release as well, which might already address these issues. The following XF86Config settings reportedly work.

______________________________________________________________________
# device section settings
Chipset "AT24"
Videoram 4032
# videomodes tested by Oliver Schaertel
# 25.18 28.32 for 640 x 480 (70hz)
# 61.60 for 1024 x 768 (60hz)
# 120 for 1280 x 1024 (66hz)
______________________________________________________________________

In summary, there is nothing prohibiting this except for the fact that the drivers in XFree86 are not yet finished.
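For orientation, the settings quoted above belong in the Device section of an XF86Config for the SVGA server. The following is an unverified sketch only; the Identifier string is a placeholder, and you still need matching Monitor and Screen sections for your own hardware:

```
Section "Device"
    Identifier "AT3D"
    Chipset    "AT24"
    Videoram   4032
EndSection
```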
If you want a more technical explanation: Voodoo Rush (tm) support requires X server changes to support grabbing a buffer area in the video memory on the AT3D board, as the Voodoo Rush (tm) based boards need to store their back buffer and z buffer there. This memory allocation and locking requirement is not a 3Dfx specific problem; it is also needed e.g. for support of TV capture cards, and is thus under active development for XFree86. This means changes at the device dependent X level (thus XAA), which are currently implemented as an extension to XFree86 DGA (Direct Graphics Access, an X11 extension proposal implemented in different ways by Sun and XFree86, that is not part of the final X11R6.1 standard and thus not portable). It might be part of an XFree86 GLX implementation later on. The currently distributed X servers assume they have full control of the framebuffer, and use anything that is not used by the visible region of the framebuffer as pixmap cache, e.g. for caching fonts.

8.4. What about GLX for XFree86?

There are a couple of problems. The currently supported Voodoo Graphics (tm) hardware and the available revision of Linux Glide are full screen only, and not set up to share a framebuffer with a window environment. Thus GLX or other integration with X11 is not yet possible. The Voodoo Rush (tm) might be capable of cooperating with XFree86 (that is, an SVGA compliant board will work with the XFree86 SVGA server), but it is not yet supported by Linux Glide, nor do the S3 or other XFree86 servers support these boards yet. In addition, GLX is tied to OpenGL or, in the Linux case, to Mesa. The XFree86 team is currently working on integrating Mesa with their X server. GLX is in beta; XFree86 3.3 has the hooks for GLX. See Steve Parker's GLX pages at www.cs.utah.edu/~sparker/xfree86-3d/ for the most recent information. Moreover, there is a joint effort by XFree86 and SuSE, which includes a GLX, see www.suse.de/~sim/.
Currently, Mesa still uses its GLX emulation with Linux.

8.5. Glide and commercial X Servers?

I have not received any mail regarding use of Glide and/or Mesa with commercial X Servers. I would be interested to get confirmation on this, especially on Mesa and Glide with a commercial X Server that has GLX support.

8.6. Glide and SVGA?

You should have no problems running Glide based applications either single or dual screen using VGA modes. It might be a good idea to set up the 640x480 resolution in the SVGA modes, too, if you are using a single screen setup.

8.7. Glide and GGI?

A GGI driver for Glide is under development by Jon M. Taylor, but has not officially been released and was put on hold till completion of GGI 0.0.9. For information about GGI see synergy.caltech.edu/~ggi/. If you are adventurous, you might find the combination of XGGI (a GGI based X server for XFree86) and GGI for Glide an interesting prospect. There is also a GGI driver interfacing the OpenGL API, tested with unaccelerated Mesa. Essentially, this means X11R6 running on a Voodoo Graphics (tm), using either Mesa or Glide directly.

9. FAQ: OpenGL/Mesa?

9.1. What is OpenGL?

OpenGL is an immediate mode graphics programming API originally developed by SGI based on their previous proprietary Iris GL, and it became an industry standard several years ago. It is defined and maintained by the Architecture Review Board (ARB), an organization that includes members such as SGI, IBM, DEC, and Microsoft. OpenGL provides a complete feature set for 2D and 3D graphics operations in a pipelined hardware accelerated architecture for triangle and polygon rendering. In a broader sense, OpenGL is a powerful and generic toolset for hardware assisted computer graphics.

9.2. Where to get additional information on OpenGL?
The official site for OpenGL, maintained by the members of the ARB, is www.opengl.org. A highly recommended site is Mark Kilgard's Gateway to OpenGL Info at reality.sgi.com/mjk_asd/opengl-links.html: it provides pointers to books, online manual pages, GLUT, GLE, Mesa, ports to several OSes, and tons of demos and tools. If you are interested in game programming using OpenGL, there is the OpenGL-GameDev-L@fatcity.com mailing list, administered via ListServ@fatcity.com. Be warned, this is a high traffic list with very technical content, and you will probably prefer to use procmail to handle the 100 messages per day coming in. You can cut down bandwidth using the SET OpenGL-GameDev-L DIGEST command. It is also not appropriate if you are looking for introductions. The archive is handled by the ListServ software; use the INDEX OpenGL-GameDev-L and GET OpenGL-GameDev-L "filename" commands to get a preview before subscribing.

9.3. Is Glide an OpenGL implementation?

No, Glide is a proprietary 3Dfx API with several features specific to the Voodoo Graphics (tm) and Voodoo Rush (tm). A 3Dfx OpenGL is in preparation (see below). Several Glide features would require EXTensions to OpenGL, some of which are already found in other implementations (e.g. paletted textures). The closest thing to a hardware accelerated Linux OpenGL you can currently get is Brian Paul's Mesa along with David Bucciarelli's Mesa Voodoo driver (see below).

9.4. Is there an OpenGL driver from 3Dfx?

Both the 3Dfx website and the Quantum3D website announced OpenGL for Voodoo Graphics (tm) to be available 4Q97. The driver is currently in beta, and accessible only to registered developers under a written beta test agreement. A Linux port has not been announced yet.

9.5. Is there a commercial OpenGL for Linux and 3Dfx?

I am not aware of any third party commercial OpenGL that supports the Voodoo Graphics (tm). Last time I paid attention, neither MetroX nor XInside OpenGL did.

9.6. What is Mesa?
Mesa is a free implementation of the OpenGL API, designed and written by Brian Paul, with contributions from many others. Its performance is competitive, and while it is not officially certified, it is an almost fully compliant OpenGL implementation conforming to the ARB specifications - more complete than some commercial products out there, actually.

9.7. Does Mesa work with 3Dfx?

The latest Mesa release works with Linux Glide 2.4. In fact, support was included in earlier versions; however, this driver is still under development, so be prepared for bugs and less than optimal performance. It is steadily improving, though, and bugs are usually fixed very fast. You will need to get the Mesa library archive from the iris.ssec.wisc.edu FTP site. It is recommended to subscribe to the mailing list as well, especially when trying to track down bugs, hardware, or driver limitations. Make sure to get the most recent distribution. Mesa 3.0 is in preparation.

9.8. How portable is Mesa with Glide?

It is available for Linux and Win32, and any application based on Mesa will only have the usual system specific code, which should usually mean X Windows vs. Windows, or GLX vs. WGL. If you use e.g. GLUT or Qt, you should get away without any system specifics at all for virtually all applications. There are only a few issues (like sampling relative mouse movement) that are not addressed by the available portable GUI toolkits. Mesa/Glide is also available for DOS. The 32-bit DOS port is maintained by Charlie Wallace and kept up to date with the main Mesa base. See www.geocities.com/~charlie_x/ for the most current releases.

9.9. Where to get info on Mesa?

The Mesa home page is at www.ssec.wisc.edu/~brianp/Mesa.html. There is an archive of the Mesa mailing list at www.iqm.unicamp.br/mesa/. This list is not specific to 3Dfx and Glide, but if you are interested in using 3Dfx hardware to accelerate Mesa, it is a good place to start.

9.10. Where to get information on Mesa Voodoo?
For the latest information on the Mesa Voodoo driver maintained by David Bucciarelli (tech.hmw@plus.it), see the home page at www-hmw.caribel.pisa.it/fxmesa/.

9.11. Does Mesa support multitexturing?

Not yet (as of Mesa 2.6), but it is on the list. In Mesa you will probably have to use the OpenGL EXT_multitexture extension once it is available. There is no final specification for multitextures in OpenGL yet; it is supposed to be part of the upcoming OpenGL 1.2 revision. There might be a Glide driver specific implementation of the extension in upcoming Mesa releases, but as long as only certain Quantum3D Obsidian boards come with multiple TMU's, it is not a top priority. This will surely change once Voodoo 2 (tm) based boards are in widespread use.

9.12. Does Mesa support single pass trilinear mipmapping?

Multiple TMU's can already be used for single pass trilinear mipmapping in current Linux Glide, for improved image quality without performance penalty. Mesa support is not yet done (as of Mesa 2.6), but is in preparation.

9.13. What is the Mesa "Window Hack"?

The most recent revisions of Mesa contain an experimental feature for Linux XFree86. Basically, the GLX emulation used by Mesa copies the contents of the Voodoo Graphics (tm) board's most recently finished framebuffer into video memory on each glXSwapBuffers call. This feature is also available with Mesa for Windows. This obviously puts some drain on the PCI bus, doubled by the fact that this uses X11 MIT SHM, not XFree86 DGA, to access the video memory. The same approach could theoretically be used with e.g. SVGA. The major benefit is that you can use a Voodoo Graphics (tm) board for accelerated rendering into a window, and that you don't have to use the VGA passthrough mode (the video output of the VGA board deteriorates in passing through, which is very visible with high end monitors like e.g. the EIZO F784-T). Note that this experimental feature is NOT Voodoo Rush (tm) support by any means.
It applies only to the Voodoo Graphics (tm) based boards. Moreover, you need to use a modified GLUT, as interfacing the window management system and handling the events appropriately has to be done by the application; it is not handled in the driver. Make really sure that you have enabled the following environment variables:

______________________________________________________________________
export SST_VGA_PASS=1        # to stop video signal switching
export SST_NOSHUTDOWN=1      # to stop video signal switching
export MESA_GLX_FX="window"  # to initiate Mesa window mode
______________________________________________________________________

If you manage to forget one of the SST variables, your VGA board will be shut off, and you will lose the display (but not the actual X session). It is pretty hard to get that back while effectively blind. Finally, note that the libMesaGL.a (or .so) library can contain multiple client interfaces, i.e. the GLX, OSMesa, and fxMesa (and even SVGAMesa) interfaces can all be compiled into the same libMesaGL.a. The client program can use any of them freely, even simultaneously if it's careful.

9.14. How about GLUT?

Mark Kilgard's GLUT distribution is a very good place to get sample applications plus a lot of useful utilities. You will find it at reality.sgi.com/mjk_asd/glut3/, and you should get it anyway. The current release is GLUT 3.6, and discussion on a GLUT 3.7 (aka GameGLUT) has begun. Note that Mark Kilgard has left SGI recently, so the archive might move some time this year - for the time being it will be kept at SGI. There is also a GLUT mailing list, glut@perp.com.
Send mail to majordomo@perp.com, with one of the following in the body of your email message:

______________________________________________________________________
help
info glut
subscribe glut
end
______________________________________________________________________

As GLUT handles double buffers, windows, events, and other operations closely tied to hardware and operating system, using GLUT with Voodoo Graphics (tm) requires support, which is currently in development within GLX for Mesa. It already works for most cases.

10. FAQ: But Quake?

10.1. What about that 3Dfx GL driver for Quake?

The 3Dfx Quake GL, aka mini-driver, aka miniport, aka Game GL, aka 3Dfx GL alpha, implemented only a Quake-specific subset of OpenGL (see http://www.cs.unc.edu/~martin/3dfx.html for an unofficial list of supported code paths). It is not supported, and is not updated anymore. It was a Win32 DLL (opengl32.dll) released by 3Dfx and was available for Windows only. This DLL has not been, and will not be, ported to Linux.

10.2. Is there a 3Dfx based glQuake for Linux?

Yes. A glQuake binary, linuxquake v0.97, has been released based on Mesa with Glide. The Quake2 q2test binary for Linux and Voodoo Graphics (tm) has been made available as well. A full Quake2 for Linux was released in January 1998, with linuxquake2-3.10. Dave "Zoid" Kirsch is the official maintainer of all Linux ports of Quake, Quakeworld, and Quake2, including all the recent Mesa based ports. Note that all Linux ports, including the Mesa based ones, are not officially supported by id Software. See ftp.idsoftware.com/idstuff/quake/unix/ for the latest releases.

10.3. Does glQuake run in an XFree86 window?

A revision of Mesa and the Mesa-based Linux glQuake is in preparation. Mesa already supports this via GLX, but Linux glQuake does not use GLX.

10.4. Known Linux Quake problems?

Here is an excerpt, as of January 7th, 1998. I omitted most stuff not specific to 3Dfx hardware.
o You really should run Quake2 as root when using the SVGALib and/or GL renderers. You don't have to run as root for the X11 refresh, but the modes on the mouse and sound devices must be read/writable by whatever user you run it as. The dedicated server requires no special permissions.

o X11 has some garbage on the screen when 'loading'. This is normal in 16bit color mode. X11 doesn't work in 24bit (TrueColor) mode. It would be very slow in any case.

o Some people are experiencing crashes with the GL renderer. Make sure you install the libMesa that comes with Quake2! Older versions of libMesa don't work properly.

o If you experience video 'lag' in the GL renderer (the frame rate feels like it's lagging behind your mouse movement), type "gl_finish 1" in the console. This forces an update on a per frame basis.

o When running the GL renderer, make sure you have killed selection and/or gpm, or the mouse won't work, as they won't "release" it while Quake2 is running in GL mode.

10.5. Known Linux Quake security problems?

As Dave Kirsch posted on January 28th, 1998: an exploit for Quake2 under Linux has been published. Quake2 is using shared libraries. While the README so far does not specifically mention it, note that Quake2 should not be setuid. If you want to use the ref_soft and ref_gl renderers, you should run Quake2 as root. Do not make the binary setuid. You can run both those renderers at the console only, so being root is not that much of an issue. The X11 renderer does not need any root permissions (if /dev/dsp is writable by others, for sound). The dedicated server mode does not need to be root either, obviously. Problems such as root requirements for games have been sort of a sore spot in Linux for a number of years now. This is one of the goals that e.g. GGI is targeting to fix. A ref_ggi might be supported in the near future.

10.6. Does LinuxQuake use multitexturing?
To my understanding, glQuake will use a multitexture EXTension if the OpenGL driver in question offers it. The current Mesa implementation and the Glide driver for Linux do not yet support this extension, so for the time being the answer is no. See the section on Mesa and multitexturing for details.

10.7. Where can I get current information on Linux glQuake?

Try some of these sites: the "Linux Quake Resource" at linuxquake.telefragged.com, or the "Linux Quake Page" at www.planetquake.com/threewave/linux/. Alternatively, you could look for Linux Quake sites in the "SlipgateCentral" database at www.slipgatecentral.com.

11. FAQ: Troubleshooting?

11.1. Has this hardware been tested?

See the hardware requirements list above. I currently do not maintain a conclusive list of vendors and boards, as no particular board specific problems have been verified. Currently, only 3Dfx and Quantum3D provide boards for testing to the developers, so Quantum3D consumer boards are a safe bet. Every other Voodoo Graphics (tm) based board should work, too. I have reports regarding the Orchid Righteous 3D, Guillemot Maxi 3D Gamer, and Diamond Monster 3D. If you are a board manufacturer who wants to make sure that his Voodoo Graphics (tm), Voodoo Rush (tm) or Voodoo 2 (tm) boards work with upcoming releases of Linux, XFree86, Linux Glide and/or Mesa, please contact me, and I will happily forward your request to the persons maintaining the drivers in question. If you are interested in support for Linux Glide on platforms other than the PC, e.g. DEC Alpha, please contact the maintainer of Linux Glide, Daryll Strauss, at daryll@harlot.rb.ca.us

11.2. Failed to change I/O privilege?

You need to be root, or setuid your application, to run a Glide based application. For DMA, the driver accesses /dev/mem, which is not writeable by anybody but root, for good reasons. See the README in the Glide distribution for Linux.

11.3. Does it work without root privilege?
There are compelling cases where the setuid requirement is a problem, obviously. There are currently solutions in preparation, which require changes to the library internals themselves. 11.4. Displayed images look awful (single screen)? If you are using the analog pass-through configuration, the common SVGA or X11 display might look pretty bad. You could try to get a better connector cable than the one provided with the accelerator board (the ones delivered with the Diamond Monster 3D are reportedly worse than the one accompanying the Orchid Righteous 3D), but up to a degree there will inevitably be signal loss with an additional transmission added. If the 640x480 full screen image created by the accelerator board does look awful, this might indicate a real hardware problem. You will have to contact the board manufacturer, not 3Dfx, for details, as the quality of the video signal has nothing to do with the accelerator - the board manufacturer chooses the RAMDAC, output drivers, and other components responsible. 11.5. The last frame is still there (single or dual screen)? You terminated your application with Ctrl-C, or it did not exit normally. The accelerator board will dutifully provide the current content of the framebuffer as a video signal unless told otherwise. 11.6. Powersave kicks in (dual screen)? When your application terminates in dual screen setups, the accelerator board does not provide video output any longer. Thus powersave kicks in each time. To avoid this, use ______________________________________________________________________ setenv SST_DUALSCREEN 1 ______________________________________________________________________ 11.7. My machine seems to lock (X11, single screen)? If you are running X when calling a Glide application, you probably moved the mouse out of the window, and the keyboard inputs do not reach the application anymore. 
If your application is supposed to run concurrently with X11, it is recommended to open a full screen window, or to use the XGrabPointer and XGrabServer functions to redirect all input to the application while the X server cannot access the display. Note that grabbing all input with XGrabPointer and XGrabServer does not qualify as well-behaved application behavior, and that your program might block the entire system. If you experience this problem without running X, be sure that there is no hardware conflict (see below). 11.8. My machine locks (single or dual screen)? If the system definitely does not respond to any inputs (you are running two displays and know about the loss of focus), you might be experiencing a more or less subtle hardware conflict. See the installation troubleshooting section for details. If there is no obvious address conflict, there might still be other problems (below). If you are writing your own code, the most common reason for locking is that you didn't snap your vertices. See the section on snapping in the Glide documentation. 11.9. My machine locks (used with S3 VGA board)? It is possible you have a problem with memory region overlap specific to S3. There is some info and a patch for the so-called S3 problem on the 3Dfx web site, but these apply to Windows only. To my understanding, the cause of the problem is that some S3 boards (older revisions of the Diamond Stealth S3 968) reserve more memory space than actually used, thus the Voodoo Graphics (tm) has to be mapped to a different location. However, this has not been reported as a problem with Linux, and might be Windows-specific. 11.10. No address conflict, but locks anyway? If you happen to use a motherboard with non-standard or incomplete PCI support, you could try to shuffle the boards a bit. I am running an ASUS TP4XE that has that non-standard modified "Media Slot", i.e. 
PCI slot4 with an additional connector for ASUS-manufactured SCSI/Sound combo boards, and I experienced severe problems while running a Diamond Monster 3D in that slot. The system has operated flawlessly since I put the board in one of the regular slots. 11.11. Mesa runs, but does not access the board? Be sure that you recompiled all the libraries (including the toolkits the demo programs use - remember that GLUT does not yet support Voodoo Graphics (tm)), and that you removed the older libraries, ran ldconfig, and/or set your LD_LIBRARY_PATH properly. Mesa supports several drivers in parallel (you could use X11 SHM, off screen rendering, and Mesa Voodoo at the same time), and you might have to create and switch contexts explicitly (see the MakeCurrent function) if the Voodoo Graphics (tm) isn't chosen by default. 11.12. Resetting dual board SLI? If a Quantum 3D Obsidian board used in an SLI setup exits abruptly (i.e., the application crashes, or is aborted by the user), the boards are left in an undefined state. With the dual-board set, you can run a program called resetsli to reset them. Until you run the resetsli program, you will not be able to re-initialize the Obsidian board. 11.13. Resetting single board SLI? The resetsli program mentioned above does not yet work with a single board Obsidian SLI (e.g. the Obsidian 100-4440SB). You will have to reboot your system with the reset button in order to reset the board. 4mb Laptop HOWTO Bruce Richardson 25 March 2000 How to put a "grown-up" Linux on a small-spec (4mb RAM, <=200mb hard disk) laptop. ______________________________________________________________________ Table of Contents 1. Introduction 1.1 Why this document was written. 1.2 What use is a small laptop? 1.3 Why not just upgrade the laptop? 1.4 What about 4mb desktop machines? 1.5 What this document doesn't do. 1.6 Where to find this document. 1.7 Copyright 2. The Laptops 2.1 Basic Specifications 2.1.1 Compaq Contura Aero 2.1.2 Toshiba T1910 2.2 The Problem 2.3 The Solution 3. 
Choices Made 3.1 What to use to create the initial root partition? 3.2 The Distribution 3.2.0.1 But I don't like Slackware! 3.3 Which installation method to use? 3.4 Partition Layout 3.4.1 Basic Requirement 3.4.2 How complex a layout? 3.5 Which components to install? 4. The Pre-installation Procedure 4.1 muLinux Preparation 4.2 Prepare the installation root files. 4.3 Create the partitions. 4.3.1 Mini-Linuces and ext2 file-systems - an important note. 4.3.2 Procedure 5. The Installation 5.1 Boot the machine 5.2 Floppy/Parport CD-ROM Install 5.3 Network/PCMCIA Install 5.3.1 PCMCIA install on the Aero 5.4 Set-up 5.4.1 AddSwap 5.4.2 Target 5.4.3 Select 5.4.4 Install 5.4.5 Configure 5.4.6 Exit 5.5 Pre-reboot Configuration 5.6 Post-reboot Configuration. 5.6.1 Re-use the temporary root. 5.6.2 Other configuration tweaks. 6. Conclusion 7. Appendix A: 7.1 A - Base Linux System 7.1.0.1 Packages considered for omission: 7.1.0.2 Packages installed: 7.2 AP - Non-X Applications 7.2.0.1 Packages considered for inclusion: 7.2.0.2 Packages installed: 7.3 D - Development Tools 7.3.0.1 Packages installed: 7.4 E - Emacs 7.4.0.1 Packages installed: 7.5 F - FAQs and HOWTOs 7.5.0.1 Packages installed: 7.6 K - Kernel Source 7.6.0.1 Packages Installed: 7.7 N - Networking Tools and Apps 7.7.0.1 Packages installed: 7.8 Tetex 7.8.0.1 Packages installed: 7.9 Y - BSD Games Collection 7.9.0.1 Packages installed: 7.10 End result 8. Appendix B: Resources relevant to this HOWTO ______________________________________________________________________ 1. Introduction 1.1. Why this document was written. I got my hands on two elderly laptops, both with just 4mb RAM and small (<=200mb) hard drives. I wanted to install Linux on them. The documentation for this kind of laptop all recommends installing either a mini-Linux or an old (and therefore compact) version of one of the professional distributions. I wanted to install an up-to-date professional distribution. 1.2. What use is a small laptop? Plenty. 
It isn't going to run X or be a development box (see ``Which components to install?'') but if you are happy at the console you have a machine that can do e-mail, networking, writing etc. Laptops also make excellent diagnostic/repair tools and the utilities for that will easily fit onto small laptops. 1.3. Why not just upgrade the laptop? Upgrading an old laptop is not much cheaper than upgrading a new one, which is a lot to spend on an old machine, especially considering that the manufacturer isn't supporting it any more and spare parts are hard to find. 1.4. What about 4mb desktop machines? The procedure described in this document will work perfectly well on a desktop PC. On the other hand, upgrading a desktop machine is far easier and cheaper than upgrading a laptop. Even if you don't upgrade it, there are still simpler options. You could take out the hard disk, put it in a more powerful machine, install Linux, trim it to fit and then put the disk back in the old machine. 1.5. What this document doesn't do. This document is not a general HOWTO about installing Linux on laptops or even a specific HOWTO for either of the two machines mentioned here. It simply describes a way of squeezing a large Linux into a very small space, citing two specific machines as examples. 1.6. Where to find this document. The latest copy of this document can be found in several formats at http://website.lineone.net/~brichardson/linux/4mb_laptops/. 1.7. Copyright This document is copyright (c) Bruce Richardson 2000. It may be distributed under the terms set forth in the LDP license at sunsite.unc.edu/LDP/COPYRIGHT.html. This HOWTO is free documentation; you can redistribute it and/or modify it under the terms of the LDP license. This document is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose. See the LDP license for more details. 
Toshiba and T1910 are trademarks of Toshiba Corporation. Compaq and Contura Aero are trademarks of Compaq Computer Corporation. 2. The Laptops This section describes the laptops that I have used this procedure on, the problems faced when installing Linux on them and the solutions to those problems (in outline). 2.1. Basic Specifications 2.1.1. Compaq Contura Aero · 25MHz 486SX CPU · 4mb RAM · 170mb Hard Disk · 1 PCMCIA Type II slot · External PCMCIA 3.5" Floppy drive (-- The PCMCIA floppy drive has a proprietary interface which is partly handled by the Aero's unique BIOS. The Linux PCMCIA drivers can't work with it. According to the PCMCIA-HOWTO, if the drive is connected when the laptop boots it will work as a standard drive and Card Services will ignore the socket but it is not hot-swappable. However, I found that the drive becomes inaccessible as soon as Card Services start unless there is a mounted disk in the drive. This has implications for the installation process - these are covered at the relevant points. --) 2.1.2. Toshiba T1910 · 33MHz 486SX CPU · 4mb RAM · 200mb Hard Disk · Internal 3.5" Floppy drive · 1 PCMCIA Type II/III slot 2.2. The Problem The small hard disks and the lack of an internal floppy on the Aero make the installation more tricky than normal but the real problem is the RAM. None of the current distributions has an installation disk that will boot in 4mb, not even if the whole hard disk is a swap partition. The standard installation uses a boot disk to uncompress a root-partition image (either from a second floppy or from CD-ROM) into a ram-disk. The root-image is around 4mb in size. That's all the RAM available in this scenario. Try it and it freezes while unpacking the root-image. 2.3. The Solution The answer is to eliminate the ram-disk. If you can mount root on a physical partition you will have enough memory to do the install. 
Since the uncompressed ram-disk is too big to fit on a floppy, the only place left is on the hard disk of the laptop. The steps are: 1. Find something that will boot in 4mb RAM and which can also create ext2 partitions. 2. Use it to create a swap partition and a small ext2 partition on the laptop's hard disk. 3. Uncompress the installation root-image and copy it onto the ext2 partition. 4. Boot the laptop from the installation boot-disk, pointing it at the ext2 partition on the hard disk. 5. The installation should go more or less as normal from here. The only question was whether a distribution that wouldn't install (under normal circumstances) on the laptops would run on them. The short answer is "Yes". If you're an old Linux hand then that's all you need to know. If not, read on - some of the steps listed above aren't as simple as they look. 3. Choices Made This section describes the choices available, which options are practical, which ones I decided on and why. 3.1. What to use to create the initial root partition? The best tool for this is a mini-Linux. There's a wide selection of small Linuces available on the net, but most of them won't boot in 4mb RAM. I found two that will: SmallLinux http://smalllinux.netpedia.net/ SmallLinux will boot in as little as 2mb RAM but its root disk can't be taken out of the drive, which is a shame since otherwise it has everything we need (i.e. fdisk, mkswap and mkfs.ext2). SmallLinux can create the needed partitions but can't be used to copy the root partition. muLinux http://sunsite.auc.dk/mulinux/ muLinux will boot in 4mb but only in a limited single-user mode. In this mode fdisk and mkswap are available but mkfs.ext2 and the libraries needed to run it are on the /usr partition which is not available in maintenance mode. To use muLinux to do the whole pre-installation procedure, the files needed to create ext2 file-systems must be extracted from the usr disk image and copied onto a floppy. 
This gives the option of either using SmallLinux to create the partitions and muLinux to copy the root partition or using muLinux to do the whole job. Since I had two laptops I tried both. 3.2. The Distribution It didn't take much time to choose Slackware. Apart from the fact that I like it but haven't used it much and want to learn more, I considered the following points: · Slackware has possibly the most low-tech DIY install of all the major distributions. It is also one of the most flexible, coming with a wide range of boot-disk kernels to suit many different machines. This makes it well suited to the kind of hacking about required in this scenario. · Slackware supports all the methods listed in ``Which Installation method to use?''. · Slackware is a distribution designed by one person. I'm sure Patrick Volkerding won't object if I say this means its configuration tools are simpler and more streamlined. In my opinion this makes the job of trimming the installation to fit cramped conditions easier. Version 7.0 was the latest version when I tried this so that's what I used. 3.2.0.1. But I don't like Slackware! You don't have to use it. I can't answer for all the distributions but I know that Debian, Red Hat and SuSE offer a range of installation methods and have an "expert" installation procedure (-- Does Debian do any other kind? --) which can be used here. Most of the steps in this document would apply to any of the distributions without change. If you haven't used the expert method with your preferred distribution before, do a trial run on a simple desktop machine to get the feel of it and to explore the options it offers. 3.3. Which installation method to use? Floppy Install This means churning out 15 floppies - which only gives you an absolute minimal install and requires a second stage to get the apps you want on. It's also very slow on such low-spec machines. This is a last resort if you can't make the others work. 
Parallel-port Install Where the parallel port has an IDE device, parallel cable or pocket ethernet adaptor (-- A pocket LAN adaptor installation onto these machines will be very slow. --) attached. This would be a good choice for the Aero, leaving the PCMCIA slot free to run the floppy drive. PCMCIA Install As above, this could be a CD-ROM or network install. This would be the best method for the T1910 - on the Aero it's a bit more awkward. ISA/PCI Ethernet Install Not an option for the laptops, obviously, but included in case your target machine is a desktop PC. The tools I had to hand dictated a PCMCIA network install. I will point out where steps differ for the other methods. Whichever method you choose, you need to have a higher-spec machine available - even if only to create the disks for a floppy install. 3.4. Partition Layout 3.4.1. Basic Requirement This procedure requires at least two Linux Native partitions in addition to a Swap partition. Since one of the ext2 partitions will be in use as temporary root during the installation it will not be available as a target partition and so should be small - though no smaller than 5mb. It makes sense to use for this a partition that you will re-use as /home after installation is complete. Another option would be to re-create it afterwards as a DOS partition to give you a dual-boot laptop. 3.4.2. How complex a layout? There isn't room to get too clever here. There is an argument for having a single ext2 partition and using a swap file to avoid wasting space but I would strongly urge creating a separate partition for /usr. If you have only one partition and something goes wrong with it you may well be faced with a complete re-installation. Separating /usr and having a small partition for / makes disaster recovery a more likely prospect. On both machines I created 4 partitions in total: 1. A swap partition -- 16mb on the T1910, 20 on the Aero (I'm more likely to upgrade the memory on the Aero). 2. 
/home (temporary root during installation) -- 10mb 3. / -- 40mb on the T1910, 30mb on the Aero. 4. /usr -- All the remainder. In addition, the Aero uses hda3 for a 2mb DOS partition containing configuration utilities. See the Aero FAQs for details. 3.5. Which components to install? The full glibc libraries alone would nearly fill the hard disks so there's no question of building a development machine. It looks as if a minimal X installation can be squeezed in but I'm sure it would crawl and I don't want it anyway. I decided to install the following (for a full listing see ``Appendix A''): · The core Linux utilities · Assorted text apps from the ap1 file set · Info/FAQ/HOWTO documentation · Basic networking utilities · The BSD games This selection matches the kind of machine described in ``What use is a small laptop?''. 4. The Pre-installation Procedure This section covers creating a swap partition and a temporary root partition on the laptop's hard disk. Nothing here is Slackware-specific. 4.1. muLinux Preparation If you are going to use only muLinux for this procedure then you need to prepare a disk with mkfs.ext2 and supporting libraries on it. From the muLinux setup files uncompress USR.bz2 and mount it as a loop file-system. If you are in the same directory as the USR file and you want to mount it as /tmpusr then the sequence for this is: ______________________________________________________________________ losetup /dev/loop0 USR mount -t ext2 /dev/loop0 /tmpusr ______________________________________________________________________ From there copy mkfs.ext2, libext2fs.so.2, libcomerr.so.2 and libuuid.so.1 onto a floppy. 4.2. Prepare the installation root files. Select the root disk you want - I used the color one with no problems but the text one would be slightly faster in these low memory conditions. Uncompress the image and mount it as a loop device. The procedure is the same as in the above section but the root disk image is a minix file-system. 
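Incidentally, mke2fs will also operate on a plain file, so you can rehearse the formatting options described in this document without sacrificing a floppy or needing root privilege. A minimal sketch, assuming e2fsprogs is installed (the image path and use of /tmp are my own choices, not part of the original procedure):

```shell
# Create a file the size of a 1722k floppy (path is an assumption)
dd if=/dev/zero of=/tmp/floppy.img bs=1024 count=1722 2>/dev/null
# -F forces mke2fs to work on a regular file; -N requests the inode count
mke2fs -q -F -N 432 /tmp/floppy.img
# Inspect the result without mounting anything
tune2fs -l /tmp/floppy.img | grep -i "inode count"
```

On the real hardware you would of course format the device itself (e.g. /dev/fd0H1722) rather than an image file.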
Next you need 3 1722 floppies or 4 1440 floppies with ext2 file-systems - it's better with 1722 disks as you don't need to split the /lib directory. Give one floppy twice the default number of inodes so it can take the /dev directory. That's 432 inodes for a 1722 disk or 368 for a 1440. If you specify /dev/fd0H1722 or /dev/fd0H1440 then you don't have to give any other parameters, so for a 1722 disk do ______________________________________________________________________ mke2fs -N 432 /dev/fd0H1722 ______________________________________________________________________ If you have mounted the root image as /tmproot and the destination floppy as /floppy then cd to /tmproot. To copy the dev directory the command is ______________________________________________________________________ cp -dpPR dev/* /floppy/ ______________________________________________________________________ For the other directories with files in (bin, etc, lib, mnt, sbin, usr, var) it's ______________________________________________________________________ cp -dpPr directoryname/* /floppy/ ______________________________________________________________________ Don't bother with the empty ones (floppy, proc, root, tag, tmp) because you can simply create them on the laptop. boot and cdrom are soft links pointing to /mnt/boot and /var/log/mount respectively - you can also create them on the laptop. 4.3. Create the partitions. 4.3.1. Mini-Linuces and ext2 file-systems - an important note. To save space, small-Linux designers sometimes use older libc5 libraries, and where they do use up-to-date libc6 they leave out many of the options compiled into full distributions, including some optional features of the ext2 file-system. This has two consequences: · Trying to mount ext2 disks formatted using a modern Linux system can generate error messages if you mount them read-write. Be sure to use the -r option when mounting floppies on the laptops. 
· It is not wise to use the mkfs.ext2 that comes with the mini-Linux to create file-systems on the partitions into which Slackware will be installed. It should only be used to create the file-system on the temporary root partition. Once installation is complete this partition can be reformatted and re-used. 4.3.2. Procedure If installing on an Aero, make sure the floppy drive is inserted before switching on and do not remove it. 1. Boot from the mini-Linux (-- With muLinux, wait until the boot-process complains about the small memory space and offers the option of dropping into a shell - take that option and work in the limited single-user mode it gives you. --) 2. Use fdisk to create the partitions. 3. Reboot on leaving fdisk (with muLinux you may simply have to turn off and on again at this point). 4. Use mkswap on the swap partition and then activate it (this will make muLinux much happier). 5. If using muLinux then mount the extra floppy created in ``muLinux Preparation'', copy mkfs.ext2 into /bin and the libraries into /lib. 6. Use mkfs.ext2 to create the file-system on the temporary root partition. 7. If you have been using SmallLinux, shut down and reboot using muLinux. Don't forget to activate the swap partition again. 8. muLinux will have mounted the boot floppy on /startup - unmount it to free the floppy drive. 9. Now mount the temporary root partition and copy onto it the contents of the disks you created in ``Prepare the installation root files''. Do not be alarmed by the error messages: if, for example, you copy usr from the floppy to the temporary root partition by typing "cp -dpPr usr/* /tmproot/" then you'll get the error message "cp: sr: no such file or directory". Ignore this, nothing is wrong. 10. cd to the temporary root partition and create the empty folders (floppy, proc, root, tag, tmp) and the soft links boot (pointing to mnt/boot) and cdrom (to var/log/mount). 11. Unmount the temporary root partition - this syncs the disk. 12. 
You can simply turn off the machine now. 5. The Installation This section does not give much detail on the Slackware installation process. In fact, it assumes you are familiar with it. Instead, this section concentrates on those areas where special care or unusual steps are required. 5.1. Boot the machine Make a boot-disk from one of the images. I recommend you use bareapm.i on a laptop and bare.i on a desktop - unless you have a parallel-port IDE device (pportide.i). Boot the laptop from it. When the boot: prompt appears, type "mount root=/dev/hdax" where x is the temporary root partition. Log in as root. Then activate the swap partition. 5.2. Floppy/Parport CD-ROM Install In both these cases, no extra work should be necessary to access the installation media. Simply run setup. 5.3. Network/PCMCIA Install Slackware has supplementary disks with tools for these, and instructions for their use greet you when you log in. Use the network disk on a desktop PC with ethernet card or a laptop with pocket ethernet adaptor. Use the PCMCIA disk for a PCMCIA install. Once your network adapter/PCMCIA socket has been identified, run setup. 5.3.1. PCMCIA install on the Aero The Slackware installation process runs the PCMCIA drivers from the supplementary floppy. Because the Aero has a PCMCIA floppy drive, this means you can't remove the floppy drive to insert the PCMCIA CD-ROM/ethernet card. The solution is simple: the Slackware PCMCIA setup routine creates /pcmcia and mounts the supplementary disk there, so 1. Create the /pcmcia directory yourself 2. Mount the supplementary disk to /mnt. Be sure to specify the type as vfat - if you don't, it'll be incorrectly identified as UMSDOS and long filenames will be mis-copied. 3. cd /mnt;cp -dpPr ./* /pcmcia/ 4. Unmount the floppy. 5. Run pcmcia. When the script complains that there is no disk in the drive simply hit Enter: Card Services will start. Connect your PCMCIA device and hit Enter. 6. Run setup 5.4. 
Set-up The Slackware set-up program is straightforward. Start with the Keymap section and it'll take you forward step by step. 5.4.1. AddSwap You do need to do this step so it can put the correct entry in fstab, but make sure it doesn't run mkswap - you're already using the partition. 5.4.2. Target In this section Slackware asks which partitions will be mounted as what and then formats them if you want. The safest bet here is to leave your temporary root partition out altogether and just edit fstab later once you know you don't need it for its temporary purpose anymore. If you're going to reuse it as /home then it is OK to designate it as /home - obviously, don't format it now! If you intend to re-use it as a part of the directory structure that will have files placed in it during installation (/var, for example) then you absolutely must ignore it in this step: after the installation is complete you can move the files across. 5.4.3. Select Here you choose which general categories of software to install. I chose as follows: · A - Base Linux System · AP - Non-X applications · F - FAQs and HOWTOs · N - Networking tools and apps · Y - BSD games collection I wouldn't recommend adding to this - if anything, prune it back to A, AP and N. That gives you a core Linux setup to which you can add according to your needs. 5.4.4. Install Choose the Expert installation method. This allows you to select/reject for installation individual packages from the categories you chose in the Selection step. ``Appendix A'' goes through the precise choices I made. This part takes about 3 hours for a PCMCIA network install. You are prompted to select individual packages before the installation of each category, so you can't just walk away and leave it to run through. 5.4.5. Configure Once the packages are all installed, you are prompted to do final configuration for your machine. This covers areas like networking, Lilo, selecting a kernel etc. 
Some points to look out for: · If you did a PCMCIA install, don't accept the offer to configure your network with netconfig. This will ruin your pcmcia networking. Wait until you've rebooted and then edit /etc/pcmcia/network.opts · This is the point where you should install a kernel. For a laptop the bareapm kernel is best, for a desktop simply the bare one. 5.4.6. Exit The set-up process is finished but you are not. Do not reboot yet! There is another vital step to complete. 5.5. Pre-reboot Configuration On a normal machine you would simply reboot once the installation is complete. If you do that here you may have to wait 6 or 8 hours for a login prompt to appear and another half hour to get to the command prompt. Before rebooting you need to change or remove the elements that cause this slowdown. This involves editing config files so you need to be familiar with vi, ed or sed. At this stage your future root partition is still mounted as /mnt so remember to add that to the paths given here. /etc/passwd Edit this to change root's login shell to ash. ash really is the only practical login shell for 4mb RAM. /etc/rc.d/rc.modules Comment out the line 'depmod -a'. You only need to update module dependencies if you have changed your module configuration (recompiled or added new ones, for example). On a standard system it only takes a second or two and so it doesn't matter that it's needlessly performed each time. On a 4mb laptop it can take as much as 8 hours. When you do change your module set-up you can simply uncomment this line and reboot. Alternatively, change this part of the script so that it will only run if you pass a parameter at the boot-prompt. For example: ________________________________________________________________ if [ "$NEWMODULES" = "1" ] ; then depmod -a fi ________________________________________________________________ /etc/rc.d/rc.inet2 This script starts network services like nfs. You probably don't need these and certainly not at start-up. 
Rename this script to something like RC.inet2 - that will stop it from being run at boot and you can run it manually when you need it. /etc/rc.d/rc.pcmcia On the Aero you should also rename this script, otherwise you'll lose the use of your floppy drive on start-up. It's worth considering for any other small laptop as well - you can always run it manually before inserting a card. Once these changes have been made, you are ready to reboot. 5.6. Post-reboot Configuration. If you made the changes recommended in section ``Pre-reboot configuration'' then the boot process will only take a few minutes, as opposed to several hours. Log in as root and check that everything is functioning properly. 5.6.1. Re-use the temporary root. Once you are sure the installation is solid you can reclaim the partition you used as the temporary root. Don't just delete the contents, reformat the filesystem. Remember, the mke2fs that came with the mini-Linux is out of date. If you intend to re-use this partition as /home, remember not to create any user accounts until you have completed this step. 5.6.2. Other configuration tweaks. In such a small RAM space, every little helps. Go through Slackware's BSD-style init scripts in /etc/rc.d/ and comment out anything you don't need. Have a look at Todd Burgess' Small Memory mini-HOWTO http://eddie.cis.uoguelph.ca/~tburgess/ for more ideas. 6. Conclusion That's it - all done. You now have a laptop with the core utilities in place and 50 to 70mb spare for whichever extras you need. Don't mess it up, because it's a lot easier to modify an existing installation on such cramped old machines than it is to start from scratch again. 7. Appendix A: This appendix lists which packages (if any) from each category might be included in the installation and gives my reasons for including or omitting them. I made no attempt to install X so those categories are ignored. 
Although this appendix refers specifically to the Slackware distribution it can be used as a guide with any of the major distributions. 7.1. A - Base Linux System Most of the packages in this category are essential, even those that aren't listed as required by the Slackware set-up program. Because of this, I've listed those packages that I felt could reasonably be left out rather than all the non-compulsory packages that I installed. 7.1.0.1. Packages considered for omission: kernels (ide, scsi etc.) There's no need to install any of these, you get a chance to select a kernel at the very end of the installation process. aoutlibs This is only needed if you intend to run executables compiled in the old a.out format. Omitting it saves a lot of space. Omitted. bash1 Bash2 (simply called bash in the Slackware package list) is required for the Slackware configuration scripts but there are a lot of scripts that need bash1. I included it. getty agetty is Slackware's default getty; this package contains getty and uugetty as alternatives. Only include it if you need their extra functionality. Omitted. gpm Personally, I find this very useful at the console (and the Aero's trackball is very handy) but it's not essential. Included. ibcs2 Not needed. Omitted. isapnp No use here. Omitted. loadlin Not needed with the setup described here - unless your old laptop has some peculiarity that requires a DOS driver to initialise some of its devices. Omitted. lpr You could argue that you can do your printing from whichever desktop is nearest but I always find it useful to have printing capabilities on a laptop. Included. minicom Not a compulsory include but I want the laptop to do dial-up connection. Very handy. Included. pciutils Not needed on these old laptops. Omitted. quota Not vital but it can be used to set limits that stop you from overflowing the limited space available in these laptops. Included. tcsh I recommend using ash as your login shell. 
Only include this if you need it for scripts. Omitted.

umsprogs
   You can leave this out and still be able to access UMSDOS floppies. Omitted.

scsimods
   No use on these laptops. Omitted.

sysklogd
   This can interfere with apmd but it does provide essential information. Included.

7.1.0.2. Packages installed:

aaa_base, bash, bash1, bin, bzip2, cpio, cxxlibs, devs, e2fsprog, elflibs, elvis, etc, fileutils, find, floppy, fsmods, glibcso, gpm, grep, gzip, hdsetup, infozip, kbd, ldso, less, lilo, man, modules, modutils, pcmcia, sh_utils, shadow, sudo, sysklogd, sysvinit, tar, txtutils, util, zoneinfo

Combined size: 33.4 mb

7.2. AP - Non-X Applications

None of these packages are, strictly speaking, essential - although ash is really required for sensible operation in 4mb. Leaving them all out could save the vital space for you to squeeze in your favourite app. I selected a minimal set of tools that I don't like to do without.

7.2.0.1. Packages considered for inclusion:

apsfilter
   Not much point having printing if you can only print text files. Included.

ash
   This is the shell for low-memory machines, only taking up 60k. Use it as the default login shell unless you like waiting 10 seconds for the command prompt to reappear each time. Included.

editors (jed, joe, jove, vim)
   elvis is the default Slackware editor and a required part of the installation. If, like me, you are a vi fan then that's all you need: installing vim would be wasteful duplication given the space restrictions. If you can't stand vi and need a more DOS-style editor then joe is small. Emacs fans with some self-discipline might consider jed or jove rather than pigging out on the full-size beast. Omitted.

enscript
   If you already have apsfilter you don't really need this. Omitted.

ghostscript
   Including the fonts this comes to about 7.5mb. One to leave until after the core installation, then consider if you need it. Omitted.

groff
   Needed for the man pages. Included.
ispell
   Not essential, but very useful to the overenthusiastic touch-typist. Included.

manpages
   Included!

mc
   Slackware offers a lightweight compilation of mc but I'm happier at the command prompt. Omitted.

quota
   Not necessary on what is not a multi-user machine, but you may, like me, find it handy to stop you from forgetfully wasting the little space you have. Included.

rpm
   Don't bother. If you do have an rpm that you would like to squeeze in, use rpm2tgz on a desktop machine to turn it into a tgz package - then you can use the standard Slackware installation tools. Omitted.

sc
   A useful little spreadsheet packed very small. Included.

sudo
   Not essential, but I find it useful here: it's a cramped environment and an awkward reinstall if you mess things up - sudo helps create user profiles with the power to do the things you need without carelessly wiping your disk. Included.

texinfo
   Info documentation. Included.

zsh
   Leave this out unless you're addicted to it or have scripts that must use it. Omitted.

7.2.0.2. Packages installed:

apsfilter, ash, diff, groff, ispell, manpages, quota, sc, sudo, texinfo

Combined size: 8.1 mb

7.3. D - Development Tools

You could fit C or C++ into this space but the glibc library package is too big, so some pruning would be needed. Do the main installation first and then try it. There is room for Perl and Python.

7.3.0.1. Packages installed:

None

7.4. E - Emacs

I don't use Emacs and so saved myself some space. On the other hand, if you are an Emacs fan then you probably use it for e-mail, news and coding, so you'll claim some of that space back by omitting other packages. If you do want Emacs it might be an idea to leave this out while doing the core installation. Once the laptop is up you can try fitting in what you want/need at your leisure.

7.4.0.1. Packages installed:

None.

7.5. F - FAQs and HOWTOs

If you know it all you don't need these. I installed the lot.

7.5.0.1. Packages installed:

howto, manyfaqs, mini

Combined size: 12.4 mb

7.6.
K - Kernel Source

You can just squeeze it in. If all you want to do is read the source, go ahead.

7.6.0.1. Packages Installed:

None

7.7. N - Networking Tools and Apps

These packages were selected to provide core networking tools, dial-up capability, e-mail, web and news.

7.7.0.1. Packages installed:

dip, elm, fetchmail, mailx, lynx, netmods, netpipes, ppp, procmail, trn, tcpip1, tcpip2, uucp, wget

Combined size: 15.1 mb

7.8. Tetex

Another set that will barely squeeze in. I can't say how it would run in the space available.

7.8.0.1. Packages installed:

None

7.9. Y - BSD Games Collection

I'm addicted to several of these. If I really need that last 5mb they can go.

7.9.0.1. Packages installed:

bsdgames

Combined size: 5.4 mb

7.10. End result

In total the installed packages plus kernel took up about 75mb of disk space, of which 19.5mb was in the root partition and 55.5mb in /usr. On the Aero that left 39mb free in /usr, 74mb on the T1910.

8. Appendix B: Resources relevant to this HOWTO

Linux Laptop HOWTO
   http://www.snafu.de/~wehe/Laptop-HOWTO.html

Small Memory mini-HOWTO
   http://eddie.cis.uoguelph.ca/~tburgess/

Linux on Laptops
   http://www.cs.utexas.edu/users/kharker/linux-laptop/
   HOWTOs and installation FAQs for a wide range of machines.

Linux T1910 FAQ
   http://members.tripod.com/~Cyberpvnk/linux.htm

Linux Contura Aero FAQ
   http://domen.uninett.no/~hta/linux/aero-faq.html

Contura Aero FAQ
   http://www.reed.edu/~pwilk/aero/aero.faq
   Comprehensive FAQ on all aspects of the Contura Aero compiled by the moderators of the Aero mailing list. Good Linux section.

GNU/Linux AI & Alife HOWTO
by John Eikenberry
v1.4, 23 June 2000

This howto mainly contains information about, and links to, various AI related software libraries, applications, etc. that work on the GNU/Linux platform. All of it is (at least) free for personal use. The new master page for this document is http://zhar.net/gnu-linux/howto/

______________________________________________________________________

Table of Contents

1.
Introduction
   1.1 Purpose
   1.2 Where to find this software
   1.3 Updates and comments
   1.4 Copyright/License

2. Traditional Artificial Intelligence
   2.1 AI class/code libraries
   2.2 AI software kits, applications, etc.

3. Connectionism
   3.1 Connectionist class/code libraries
   3.2 Connectionist software kits/applications

4. Evolutionary Computing
   4.1 EC class/code libraries
   4.2 EC software kits/applications

5. Alife & Complex Systems
   5.1 Alife & CS class/code libraries
   5.2 Alife & CS software kits, applications, etc.

6. Autonomous Agents

7. Programming languages

______________________________________________________________________

1. Introduction

1.1. Purpose

The GNU/Linux OS has evolved from its origins in hackerdom to a full-blown UNIX, capable of rivaling any commercial UNIX. It now provides an inexpensive base to build a great workstation. It has shed its hardware dependencies, having been ported to DEC Alphas, Sparcs, PowerPCs, and many others. The speed of these platforms, along with Linux's networking support, makes it great for workstation clusters. As a workstation it allows for all sorts of research and development, including artificial intelligence and artificial life.

The purpose of this Mini-Howto is to provide a source to find out about various software packages, code libraries, and anything else that will help someone get started working with (and find resources for) artificial intelligence, artificial life, etc. All done with GNU/Linux specifically in mind.

1.2. Where to find this software

All this software should be available via the net (ftp || http). The links to where to find it will be provided in the description of each package. There will also be plenty of software not covered on these pages (which is usually platform independent) located on one of the resources listed on the links section of the Master Site (given above).

1.3.
Updates and comments

If you find any mistakes, know of updates to one of the items below, or have problems compiling any of the applications, please mail me at: jae@NOSPAM-zhar.net and I'll see what I can do.

If you know of any AI/Alife applications, class libraries, etc., please email me about them. Include your name, ftp and/or http sites where they can be found, plus a brief overview/commentary on the software (this info would make things a lot easier on me... but don't feel obligated ;).

I know that keeping this list up to date and expanding it will take quite a bit of work. So please be patient (I do have other projects). I hope you will find this document helpful.

1.4. Copyright/License

Copyright (c) 1996-2000 John A. Eikenberry

LICENSE

This document may be reproduced and distributed in whole or in part, in any medium physical or electronic, provided that this license notice is displayed in the reproduction. Commercial redistribution is permitted and encouraged. Thirty days advance notice, via email to the author, of redistribution is appreciated, to give the authors time to provide updated documents.

A. REQUIREMENTS OF MODIFIED WORKS

All modified documents, including translations, anthologies, and partial documents, must meet the following requirements:

· The modified version must be labeled as such.

· The person making the modifications must be identified.

· Acknowledgement of the original author must be retained.

· The location of the original unmodified document must be identified.

· The original author's name(s) may not be used to assert or imply endorsement of the resulting document without the original author's permission.

In addition it is requested (not required) that:

· The modifications (including deletions) be noted.

· The author be notified by email of the modification in advance of redistribution, if an email address is provided in the document.
As a special exception, anthologies of LDP documents may include a single copy of these license terms in a conspicuous location within the anthology and replace other copies of this license with a reference to the single copy of the license without the document being considered "modified" for the purposes of this section.

Mere aggregation of LDP documents with other documents or programs on the same media shall not cause this license to apply to those other works.

All translations, derivative documents, or modified documents that incorporate this document may not have more restrictive license terms than these, except that you may require distributors to make the resulting document available in source format.

2. Traditional Artificial Intelligence

Traditional AI is based around the ideas of logic, rule systems, linguistics, and the concept of rationality. At its roots are programming languages such as Lisp and Prolog. Expert systems are the largest successful example of this paradigm. An expert system consists of a detailed knowledge base and a complex rule system to utilize it. Such systems have been used for such things as medical diagnosis support and credit checking systems.

2.1. AI class/code libraries

These are libraries of code or classes for use in programming within the artificial intelligence field. They are not meant as stand-alone applications, but rather as tools for building your own applications.

ACL2

· Web site: www.telent.net/cliki/ACL2

ACL2 (A Computational Logic for Applicative Common Lisp) is a theorem prover for industrial applications. It is both a mathematical logic and a system of tools for constructing proofs in the logic. ACL2 works with GCL (GNU Common Lisp).

AI Search II

· Web site: www.bell-labs.com/topic/books/ooai-book/

Submitted by: Peter M. Bouthoorn

Basically, the library offers the programmer a set of search algorithms that may be used to solve all kinds of different problems.
The idea is that when developing problem-solving software the programmer should be able to concentrate on the representation of the problem to be solved, and should not need to bother with the implementation of the search algorithm that will be used to actually conduct the search. This idea has been realized by the implementation of a set of search classes that may be incorporated in other software through C++'s features of derivation and inheritance. The following search algorithms have been implemented:

- depth-first tree and graph search.
- breadth-first tree and graph search.
- uniform-cost tree and graph search.
- best-first search.
- bidirectional depth-first tree and graph search.
- bidirectional breadth-first tree and graph search.
- AND/OR depth tree search.
- AND/OR breadth tree search.

This library has a corresponding book, "Object-Oriented Artificial Intelligence, Using C++".

Chess In Lisp (CIL)

· FTP site: chess.onenet.net/pub/chess/uploads/projects/

The CIL (Chess In Lisp) foundation is a Common Lisp implementation of all the core functions needed for development of chess applications. The main purpose of the CIL project is to get AI researchers interested in using Lisp to work in the chess domain.

DAI

· Web site: starship.skyport.net/crew/gandalf/DNET/AI

A library for the Python programming language that provides an object-oriented interface to the CLIPS expert system tool. It includes an interface to COOL (CLIPS Object Oriented Language) that allows you to:

· Investigate COOL classes

· Create and manipulate COOL instances

· Manipulate COOL message-handlers

· Manipulate modules

Nyquist

· Web site: www.cs.cmu.edu/afs/cs.cmu.edu/project/music/web/music.html

The Computer Music Project at CMU is developing computer music and interactive performance technology to enhance human musical experience and creativity.
This interdisciplinary effort draws on Music Theory, Cognitive Science, Artificial Intelligence and Machine Learning, Human Computer Interaction, Real-Time Systems, Computer Graphics and Animation, Multimedia, Programming Languages, and Signal Processing. A paradigmatic example of these interdisciplinary efforts is the creation of interactive performances that couple human musical improvisation with intelligent computer agents in real-time.

PDKB

· Web site: lynx.eaze.net/~pdkb/web/

· SourceForge site: sourceforge.net/project/?group_id=1449

Public Domain Knowledge Bank (PDKB) is an Artificial Intelligence Knowledge Bank of common sense rules and facts. It is based on the Cyc Upper Ontology and the MELD language.

Python Fuzzy Logic Module

· FTP site: ftp://ftp.csh.rit.edu/pub/members/retrev/

A simple Python module for fuzzy logic. The file is 'fuz.tar.gz' in this directory. The author also plans to write a simple genetic algorithm and a neural net library. Check the 00_index file in this directory for release info.

Screamer

· Web site: www.cis.upenn.edu/~screamer-tools/home.html

Screamer is an extension of Common Lisp that adds support for nondeterministic programming. Screamer consists of two levels. The basic nondeterministic level adds support for backtracking and undoable side effects. On top of this nondeterministic substrate, Screamer provides a comprehensive constraint programming language in which one can formulate and solve mixed systems of numeric and symbolic constraints. Together, these two levels augment Common Lisp with practically all of the functionality of both Prolog and constraint logic programming languages such as CHiP and CLP(R). Furthermore, Screamer is fully integrated with Common Lisp. Screamer programs can coexist and interoperate with other extensions to Common Lisp such as CLOS, CLIM and Iterate.
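The nondeterministic choice-and-backtrack idea behind Screamer can be sketched in plain Python with generators: each nested loop is a choice point, a failed constraint simply falls through, and iteration backtracks to the next choice. This is only an illustrative sketch (the function names and the toy constraints here are made up); Screamer itself adds undoable side effects and real constraint propagation on top of Common Lisp.

```python
def an_integer_between(low, high):
    # Nondeterministically "choose" an integer in [low, high].
    yield from range(low, high + 1)

def solutions():
    # Find all (x, y) in 1..6 with x + y == 7 and x * y == 12.
    for x in an_integer_between(1, 6):          # choice point
        for y in an_integer_between(1, 6):      # choice point
            if x + y == 7 and x * y == 12:      # constraints
                yield (x, y)                    # success
            # falling through = failure = backtrack to next choice

print(list(solutions()))  # -> [(3, 4), (4, 3)]
```

The same generator pattern (choose, test, fall through) underlies the depth-first searches in libraries like AI Search II above.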
ThoughtTreasure

· Web site: www.signiform.com/tt/htm/tt.htm

ThoughtTreasure is a project to create a database of commonsense rules for use in any application. It consists of a database of a little over 100K rules and a C API to integrate it with your applications. Python, Perl, Java and TCL wrappers are already available.

2.2. AI software kits, applications, etc.

These are various applications, software kits, etc. meant for research in the field of artificial intelligence. Their ease of use will vary, as they were designed to meet some particular research interest more than as an easy-to-use commercial package.

ASA - Adaptive Simulated Annealing

· Web site: www.ingber.com/#ASA-CODE

· FTP site: ftp.ingber.com/

ASA (Adaptive Simulated Annealing) is a powerful global optimization C-code algorithm especially useful for nonlinear and/or stochastic systems. ASA was developed to statistically find the best global fit of a nonlinear non-convex cost-function over a D-dimensional space. This algorithm permits an annealing schedule for 'temperature' T decreasing exponentially in annealing-time k, T = T_0 exp(-c k^(1/D)). The introduction of re-annealing also permits adaptation to changing sensitivities in the multi-dimensional parameter-space. This annealing schedule is faster than fast Cauchy annealing, where T = T_0/k, and much faster than Boltzmann annealing, where T = T_0/ln k.

Babylon

· FTP site: ftp.gmd.de/gmd/ai-research/Software/Babylon/

BABYLON is a modular, configurable, hybrid environment for developing expert systems. Its features include objects, rules with forward and backward chaining, logic (Prolog) and constraints. BABYLON is implemented and embedded in Common Lisp.

CLEARS

· Web site: www.coli.uni-sb.de/~clears/

The CLEARS system is an interactive graphical environment for computational semantics. The tool allows exploration and comparison of different semantic formalisms, and their interaction with syntax.
This enables the user to get an idea of the range of possibilities of semantic construction, and also where there is real convergence between theories.

CLIG

· Web site: www.ags.uni-sb.de/~konrad/clig.html

CLIG is an interactive, extendible grapher for visualizing linguistic data structures like trees, feature structures, Discourse Representation Structures (DRS), logical formulas etc. All of these can be freely mixed and embedded into each other. The grapher has been designed both to be stand-alone and to be used as an add-on for linguistic applications which display their output in a graphical manner.

CLIPS

· Web site: www.jsc.nasa.gov/~clips/CLIPS.html

· FTP site: cs.cmu.edu/afs/cs.cmu.edu/project/ai-repository/ai/areas/expert/systems/clips

CLIPS is a productive development and delivery expert system tool which provides a complete environment for the construction of rule- and/or object-based expert systems. CLIPS provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules of thumb," which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or to create new components). The procedural programming capabilities provided by CLIPS are similar to capabilities found in languages such as C, Pascal, Ada, and LISP.

EMA-XPS - A Hybrid Graphic Expert System Shell

· Web site: wmwap1.math.uni-wuppertal.de:80/EMA-XPS/

EMA-XPS is a hybrid graphic expert system shell based on the ASCII-oriented shell Babylon 2.3 of the German National Research Center for Computer Sciences (GMD).
In addition to Babylon's AI power (object-oriented data representation, forward- and backward-chained rules - collectible into sets, horn clauses, and constraint networks) a graphic interface based on the X11 Window System and the OSF/Motif Widget Library has been provided.

FOOL & FOX

· FTP site: ntia.its.bldrdoc.gov/pub/fuzzy/prog/

FOOL stands for the Fuzzy Organizer OLdenburg. It is a result of a project at the University of Oldenburg. FOOL is a graphical user interface to develop fuzzy rulebases. FOOL will help you to invent and maintain a database that specifies the behavior of a fuzzy controller or something like that. FOX is a small but powerful fuzzy engine which reads this database, reads some input values and calculates the new control value.

FUF and SURGE

· Web site: www.dfki.de/lt/registry/generation/fuf.html

· FTP site: ftp.cs.columbia.edu/pub/fuf/

FUF is an extended implementation of the formalism of functional unification grammars (FUGs) introduced by Martin Kay, specialized to the task of natural language generation. It adds the following features to the base formalism:

· Types and inheritance.

· Extended control facilities (goal freezing, intelligent backtracking).

· Modular syntax.

These extensions allow the development of large grammars which can be processed efficiently and can be maintained and understood more easily. SURGE is a large syntactic realization grammar of English written in FUF. SURGE was developed to serve as a black-box syntactic generation component in a larger generation system that encapsulates a rich knowledge of English syntax. SURGE can also be used as a platform for exploration of grammar writing with a generation perspective.

The Grammar Workbench

· Web site: www.cs.kun.nl/agfl/GWB.html

The Grammar Workbench, or GWB for short, is an environment for the comfortable development of Affix Grammars in the AGFL formalism.
Its purposes are:

· to allow the user to input, inspect and modify a grammar;

· to perform consistency checks on the grammar;

· to compute grammar properties;

· to generate example sentences;

· to assist in performing grammar transformations.

GSM Suite

· Web site: www.slip.net/~andrewm/gsm/

The GSM Suite is a set of programs for using Finite State Machines in a graphical fashion. The suite consists of programs that edit, compile, and print state machines. Included in the suite is an editor program, gsmedit, a compiler, gsm2cc, that produces a C++ implementation of a state machine, a PostScript generator, gsm2ps, and two other minor programs. GSM is licensed under the GNU General Public License and so is free for your use under the terms of that license.

Illuminator

· Web site: documents.cfar.umd.edu/resources/source/illuminator.html

Illuminator is a toolset for developing OCR and Image Understanding applications. Illuminator has two major parts: a library for representing, storing and retrieving OCR information, hereafter called dafslib, and an X-Windows "DAFS" file viewer, called illum. Illuminator and dafslib were designed to supplant existing OCR formats and become a standard in the industry. They are particularly extensible to handle more than just English. The features of this release:

· 5 magnification levels for images

· flagged characters and words

· unicode support -- American, British, French, German, Greek, Italian, MICR, Norwegian, Russian, Spanish, Swedish keyboards

· reads DAFS, TIFFs, PDAs (image only)

· save to DAFS, ASCII/UTF or Unicode

· Entity Viewer - shows properties, character choices, bounding boxes and image fragment for a selected entity; change type, change content, hierarchy mode

Jess, the Java Expert System Shell

· Web site: herzberg.ca.sandia.gov/jess/

Jess is a clone of the popular CLIPS expert system shell written entirely in Java. With Jess, you can conveniently give your applets the ability to 'reason'.
Jess is compatible with all versions of Java starting with version 1.0.2. Jess implements the following constructs from CLIPS: defrules, deffunctions, defglobals, deffacts, and deftemplates.

learn

· FTP site: sunsite.unc.edu/pub/Linux/apps/cai/

Learn is a vocabulary learning program with a memory model.

Otter: An Automated Deduction System

· Web site: www-unix.mcs.anl.gov/AR/otter/

Our current automated deduction system Otter is designed to prove theorems stated in first-order logic with equality. Otter's inference rules are based on resolution and paramodulation, and it includes facilities for term rewriting, term orderings, Knuth-Bendix completion, weighting, and strategies for directing and restricting searches for proofs. Otter can also be used as a symbolic calculator and has an embedded equational programming system.

NICOLE

· Web site: nicole.sourceforge.net

NICOLE is an attempt to simulate a conversation by learning how words are related to other words. A human communicates with NICOLE via the keyboard, and NICOLE responds with its own sentences, which are automatically generated based on what NICOLE has stored in its database. Each new sentence typed in that NICOLE doesn't already know is added to its database, thus extending NICOLE's knowledge base.

PVS

· Web site: pvs.csl.sri.com/

PVS is a verification system: that is, a specification language integrated with support tools and a theorem prover. It is intended to capture the state-of-the-art in mechanized formal methods and to be sufficiently rugged that it can be used for significant applications. PVS is a research prototype: it evolves and improves as we develop or apply new capabilities, and as the stress of real use exposes new requirements.

RIPPER

· Web site: www.research.att.com/~wcohen/ripperd.html

Ripper is a system for fast, effective rule induction. Given a set of data, Ripper will learn a set of rules that will predict the patterns in the data.
Ripper is written in ANSI C and comes with documentation and some sample problems.

SNePS

· Web site: www.cs.buffalo.edu/pub/sneps/WWW/

· FTP site: ftp.cs.buffalo.edu/pub/sneps/

The long-term goal of The SNePS Research Group is the design and construction of a natural-language-using computerized cognitive agent, and carrying out the research in artificial intelligence, computational linguistics, and cognitive science necessary for that endeavor. The three-part focus of the group is on knowledge representation, reasoning, and natural-language understanding and generation. The group is widely known for its development of the SNePS knowledge representation/reasoning system, and Cassie, its computerized cognitive agent.

Soar

· Web site: bigfoot.eecs.umich.edu/~soar/

· FTP site: cs.cmu.edu/afs/cs/project/soar/public/Soar6/

Soar has been developed to be a general cognitive architecture. We intend ultimately to enable the Soar architecture to:

· work on the full range of tasks expected of an intelligent agent, from highly routine to extremely difficult, open-ended problems

· represent and use appropriate forms of knowledge, such as procedural, declarative, episodic, and possibly iconic

· employ the full range of problem solving methods

· interact with the outside world and

· learn about all aspects of the tasks and its performance on them.

In other words, our intention is for Soar to support all the capabilities required of a general intelligent agent.

TCM

· Web site: wwwis.cs.utwente.nl:8080/~tcm/index.html

· FTP site: ftp.cs.vu.nl/pub/tcm/

TCM (Toolkit for Conceptual Modeling) is our suite of graphical editors.
TCM contains graphical editors for Entity-Relationship diagrams, Class-Relationship diagrams, Data and Event Flow diagrams, State Transition diagrams, Jackson Process Structure diagrams and System Network diagrams, Function Refinement trees and various table editors, such as a Function-Entity table editor and a Function Decomposition table editor. TCM is easy to use and performs numerous consistency checks, some of them immediately, some of them upon request.

WEKA

· Web site: lucy.cs.waikato.ac.nz/~ml/

WEKA (Waikato Environment for Knowledge Analysis) is a state-of-the-art facility for applying machine learning techniques to practical problems. It is a comprehensive software "workbench" that allows people to analyse real-world data. It integrates different machine learning tools within a common framework and a uniform user interface. It is designed to support a "simplicity-first" methodology, which allows users to experiment interactively with simple machine learning tools before looking for more complex solutions.

3. Connectionism

Connectionism is a technical term for a group of related techniques. These techniques include areas such as Artificial Neural Networks, Semantic Networks and a few other similar ideas. My present focus is on neural networks (though I am looking for resources on the other techniques). Neural networks are programs designed to simulate the workings of the brain. They consist of a network of small mathematically-based nodes, which work together to form patterns of information. They have tremendous potential and currently seem to be having a great deal of success with image processing and robot control.

3.1. Connectionist class/code libraries

These are libraries of code or classes for use in programming within the Connectionist field. They are not meant as stand-alone applications, but rather as tools for building your own applications.
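The "network of small nodes" idea above can be sketched in a few lines of Python: each node takes a weighted sum of its inputs and squashes the result through a sigmoid, and nodes feed each other to form a network. This is an illustrative sketch only, not code from any of the libraries below, and the weights are arbitrary made-up numbers.

```python
import math

def sigmoid(x):
    # Classic squashing function: maps any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs, weights, bias):
    # One small mathematical node: weighted sum of inputs, then squash.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def network(inputs):
    # Two hidden nodes feeding one output node (weights are arbitrary).
    h1 = node(inputs, [0.5, -0.6], 0.1)
    h2 = node(inputs, [-0.3, 0.8], -0.2)
    return node([h1, h2], [1.2, -0.7], 0.05)

print(network([1.0, 0.0]))
```

Training, in the back-propagation libraries listed below, consists of adjusting those weights to reduce the error between the network's output and a target value.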
ANSI-C Neural Networks

· Web site: www.geocities.com/CapeCanaveral/1624/

This site contains ANSI-C source code for 8 types of neural nets, including:

· Adaline Network

· Backpropagation

· Hopfield Model

· (BAM) Bidirectional Associative Memory

· Boltzmann Machine

· Counterpropagation

· (SOM) Self-Organizing Map

· (ART1) Adaptive Resonance Theory

They were designed to help turn the theory of a particular network model into the design for a simulator implementation, and to help with embedding an actual application into a particular network model.

BELIEF

· Web site: www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/reasonng/probabl/belief/

BELIEF is a Common Lisp implementation of the Dempster and Kong fusion and propagation algorithm for Graphical Belief Function Models and the Lauritzen and Spiegelhalter algorithm for Graphical Probabilistic Models. It includes code for manipulating graphical belief models such as Bayes Nets and Relevance Diagrams (a subset of Influence Diagrams) using both belief functions and probabilities as basic representations of uncertainty. It uses the Shenoy and Shafer version of the algorithm, so one of its unique features is that it supports both probability distributions and belief functions. It also has limited support for second-order models (probability distributions on parameters).

bpnn.py

· Web site: www.enme.ucalgary.ca/~nascheme/python/

A simple back-propagation ANN in Python.

CONICAL

· Web site: strout.net/conical/

CONICAL is a C++ class library for building simulations common in computational neuroscience. Currently its focus is on compartmental modeling, with capabilities similar to GENESIS and NEURON. A model neuron is built out of compartments, usually with a cylindrical shape. When small enough, these open-ended cylinders can approximate nearly any geometry. Future classes may support reaction-diffusion kinetics and more.
A key feature of CONICAL is its cross-platform compatibility; it has been fully co-developed and tested under Unix, DOS, and Mac OS.

IDEAL

· Web site: www.rpal.rockwell.com/ideal.html

IDEAL is a test bed for work in influence diagrams and Bayesian networks. It contains various inference algorithms for belief networks and evaluation algorithms for influence diagrams. It contains facilities for creating and editing influence diagrams and belief networks. IDEAL is written in pure Common Lisp and so it will run in Common Lisp on any platform. The emphasis in writing IDEAL has been on code clarity and providing high-level programming abstractions. It is thus very suitable for experimental implementations which need or extend belief network technology. At the highest level, IDEAL can be used as a subroutine library which provides belief network inference and influence diagram evaluation as a package. The code is documented in a detailed manual, so it is also possible to work at a lower level on extensions of belief network methods. IDEAL comes with an optional graphic interface written in CLIM. If your Common Lisp also has CLIM, you can run the graphic interface.

Matrix Class

· FTP site: ftp.cs.ucla.edu/pub/

A simple, fast, efficient C++ Matrix class designed for scientists and engineers. The Matrix class is well suited for applications with complex math algorithms. As a demonstration of the Matrix class, it was used to implement the backward error propagation algorithm for a multi-layer feed-forward artificial neural network.

nunu

· Web site: ruby.ddiworld.com/jreed/web/software/nn.html

nunu is a multi-layered, scriptable, back-propagation neural network. It is built to be used for intensive computation problems scripted in shell scripts. It is written in C++ using the STL. nunu is based on material from "Introduction to the Theory of Neural Computation" by John Hertz, Anders Krogh, and Richard G. Palmer, chapter 6.
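The back-propagation packages above (bpnn.py, nunu, the Matrix class demo) all revolve around the same gradient-descent weight update: nudge each weight in proportion to its contribution to the output error. A minimal single-neuron version of that update, training on logical OR, can be sketched in Python (an illustrative sketch only; the learning rate and epoch count are arbitrary, and real back-propagation also propagates the deltas back through hidden layers):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for logical OR: (inputs, target output).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def train(epochs=1000, rate=0.5):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in DATA:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            # Error gradient for a sigmoid output unit (the "delta").
            delta = (target - out) * out * (1 - out)
            # Gradient-descent update, the core of back-propagation.
            w[0] += rate * delta * x1
            w[1] += rate * delta * x2
            b += rate * delta
    return w, b

def predict(w, b, x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

w, b = train()
print([round(predict(w, b, *x), 2) for x, _ in DATA])
```

After training, the output is near 0 for (0, 0) and near 1 for the other three input pairs.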
Pulcinella
· Web site: iridia.ulb.ac.be/pulcinella/Welcome.html

Pulcinella is written in Common Lisp, and appears as a library of Lisp functions for creating, modifying and evaluating valuation systems. Alternatively, the user can choose to interact with Pulcinella via a graphical interface (only available in Allegro CL). Pulcinella provides primitives to build and evaluate uncertainty models according to several uncertainty calculi, including probability theory, the possibility theory of Zadeh, Dubois and Prade, and Dempster-Shafer's theory of belief functions. A User's Manual is available on request.

S-ElimBel
· Web site (???): www.spaces.uci.edu/thiery/elimbel/

S-ElimBel is an algorithm that computes the belief in a Bayesian network, implemented in MIT-Scheme. This algorithm has the particularity of being rather easy to understand. Moreover, one can apply it to any kind of Bayesian network, be it singly connected or multiply connected. It is, however, less powerful than the standard algorithm of belief propagation. Indeed, the computation has to be redone entirely for each new piece of evidence added to the network. Also, one needs to run the algorithm as many times as there are nodes for which the belief is wanted.

Software for Flexible Bayesian Modeling
· Web site: www.cs.utoronto.ca/~radford/fbm.software.html

This software implements flexible Bayesian models for regression and classification applications that are based on multilayer perceptron neural networks or on Gaussian processes. The implementation uses Markov chain Monte Carlo methods. Software modules that support Markov chain sampling are included in the distribution, and may be useful in other applications.

Spiderweb2
· Web site: www.cs.nyu.edu/~klap7794/spiderweb2.html

A C++ artificial neural net library. Spiderweb2 is a complete rewrite of the original Spiderweb library; it has grown into a much more flexible and object-oriented system.
The biggest change is that each neuron object is responsible for its own activations and updates, with the network providing only the scheduling aspect. This is a very powerful change, and it allows easy modification and experimentation with various network architectures and neuron types.

Symbolic Probabilistic Inference (SPI)
· FTP site: ftp.engr.orst.edu/pub/dambrosi/spi/
· Paper (ijar-94.ps): ftp.engr.orst.edu/pub/dambrosi/

Contains Common Lisp function libraries to implement SPI-type Bayesian nets. Documentation is very limited. Features:
· Probabilities, Local Expression Language Utilities, Explanation, Dynamic Models, and a TCL/TK based GUI.

TresBel
· FTP site: iridia.ulb.ac.be/pub/hongxu/software/

Libraries containing (Allegro) Common Lisp code for Belief Functions (a.k.a. Dempster-Shafer evidential reasoning) as a representation of uncertainty. Very little documentation. Has a limited GUI.

Various (C++) Neural Networks
· Web site: www.dontveter.com/nnsoft/nnsoft.html

Example neural net codes from the book, The Pattern Recognition Basics of AI. These are simple example codes of these various neural nets. They work well as a good starting point for simple experimentation and for learning what the code is like behind the simulators. The types of networks available on this site are (implemented in C++):
· The Backprop Package
· The Nearest Neighbor Algorithms
· The Interactive Activation Algorithm
· The Hopfield and Boltzmann machine Algorithms
· The Linear Pattern Classifier
· ART I
· Bi-Directional Associative Memory
· The Feedforward Counter-Propagation Network

3.2. Connectionist software kits/applications

These are various applications, software kits, etc. meant for research in the field of Connectionism. Their ease of use will vary, as they were designed to meet some particular research interest more than as an easy-to-use commercial package.
Aspirin - MIGRAINES (am6.tar.Z on ftp site)
· FTP site: sunsite.unc.edu/pub/academic/computer-science/neural-networks/programs/Aspirin/

This software is for creating and evaluating feed-forward networks such as those used with the backpropagation learning algorithm. It is aimed both at the expert programmer/neural network researcher who may wish to tailor significant portions of the system to his/her precise needs, and at casual users who will wish to use the system with an absolute minimum of effort.

DDLab
· Web site: www.santafe.edu/~wuensch/ddlab.html
· FTP site: ftp.santafe.edu/pub/wuensch/

DDLab is an interactive graphics program for research into the dynamics of finite binary networks, relevant to the study of complexity, emergent phenomena, neural networks, and aspects of theoretical biology such as gene regulatory networks. A network can be set up with any architecture between regular CA (1d or 2d) and "random Boolean networks" (networks with arbitrary connections and heterogeneous rules). The network may also have heterogeneous neighborhood sizes.

GENESIS
· Web site: www.bbb.caltech.edu/GENESIS/
· FTP site: genesis.bbb.caltech.edu/pub/genesis/

GENESIS (short for GEneral NEural SImulation System) is a general purpose simulation platform which was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. GENESIS has provided the basis for laboratory courses in neural simulation at both Caltech and the Marine Biological Laboratory in Woods Hole, MA, as well as several other institutions. Most current GENESIS applications involve realistic simulations of biological neural systems. Although the software can also model more abstract networks, other simulators are more suitable for backpropagation and similar connectionist modeling.
JavaBayes
· Web site: www.cs.cmu.edu/People/javabayes/index.html/

The JavaBayes system is a set of tools, containing a graphical editor, a core inference engine and a parser. JavaBayes can produce:
· the marginal distribution for any variable in a network.
· the expectations for univariate functions (for example, expected value for variables).
· configurations with maximum a posteriori probability.
· configurations with maximum a posteriori expectation for univariate functions.

Jbpe
· Web site: cs.felk.cvut.cz/~koutnij/studium/jbpe.html

Jbpe is a back-propagation neural network editor/simulator. Features:
· Standard back-propagation network creation.
· Saving the network as a text file, which can be edited and loaded back.
· Saving/loading binary files.
· Learning from a text file (with structure specified below); the number of learning periods / desired network energy can be specified as a criterion.
· Network recall.

Neural Network Generator
· Web site: www.idsia.ch/~rafal/research.html
· FTP site: ftp.idsia.ch/pub/rafal

The Neural Network Generator is a genetic algorithm for the topological optimization of feedforward neural networks. It implements the Semantic Changing Genetic Algorithm and the Unit-Cluster Model. The Semantic Changing Genetic Algorithm is an extended genetic algorithm that allows fast dynamic adaptation of the genetic coding through population analysis. The Unit-Cluster Model is an approach to the construction of modular feedforward networks with a ''backbone'' structure.

NOTE: To compile this on Linux requires one change in the Makefiles. You will need to change '-ltermlib' to '-ltermcap'.

Neureka ANS (nn/xnn)
· Web site: www.bgif.no/neureka/
· FTP site: ftp.ii.uib.no/pub/neureka/

nn is a high-level neural network specification language. The current version is best suited for feed-forward nets, but recurrent models can and have been implemented, e.g. Hopfield nets, Jordan/Elman nets, etc. In nn, it is easy to change network dynamics.
The nn compiler can generate C code or executable programs (so there must be a C compiler available), with a powerful command line interface (but everything may also be controlled via the graphical interface, xnn). It is possible for the user to write C routines that can be called from inside the nn specification, and to use the nn specification as a function that is called from a C program. Please note that no programming is necessary in order to use the network models that come with the system (`netpack').

xnn is a graphical front end to networks generated by the nn compiler, and to the compiler itself. The xnn graphical interface is intuitive and easy to use for beginners, yet powerful, with many possibilities for visualizing network data.

NOTE: You have to run the install program that comes with this to get the license key installed. It gets put (by default) in /usr/lib. If you (like myself) want to install the package somewhere other than in the /usr directory structure (the install program gives you this option), you will have to set up some environment variables (NNLIBDIR & NNINCLUDEDIR are required). You can read about these (and a few other optional variables) in appendix A of the documentation (pg 113).

NEURON
· Web site: www.neuron.yale.edu/neuron.html
· FTP site: ftp.neuron.yale.edu/neuron/unix/

NEURON is an extensible nerve modeling and simulation program. It allows you to create complex nerve models by connecting multiple one-dimensional sections together to form arbitrary cell morphologies, and allows you to insert multiple membrane properties into these sections (including channels, synapses, ionic concentrations, and counters). The interface was designed to present the neural modeler with an intuitive environment and hide the details of the numerical methods used in the simulation.
PDP++
· Web site: www.cnbc.cmu.edu/PDP++/
· FTP site (US): cnbc.cmu.edu/pub/pdp++/
· FTP site (Europe): unix.hensa.ac.uk/mirrors/pdp++/

As the field of Connectionist modeling has grown, so has the need for a comprehensive simulation environment for the development and testing of Connectionist models. Our goal in developing PDP++ has been to integrate several powerful software development and user interface tools into a general purpose simulation environment that is both user friendly and user extensible. The simulator is built in the C++ programming language, and incorporates a state-of-the-art script interpreter with the full expressive power of C++. The graphical user interface is built with the InterViews toolkit, and allows full access to the data structures and processing modules out of which the simulator is built. We have constructed several useful graphical modules for easy interaction with the structure and the contents of neural networks, and we've made it possible to change and adapt many things. At the programming level, we have set things up in such a way as to make user extensions as painless as possible. The programmer creates new C++ objects, which might be new kinds of units or new kinds of processes; once compiled and linked into the simulator, these new objects can then be accessed and used like any other.

RNS
· Web site: www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/neural/systems/rns/

RNS (Recurrent Network Simulator) is a simulator for recurrent neural networks. Regular neural networks are also supported. The program uses a derivative of the back-propagation algorithm, but also includes other (not that well tested) algorithms.
Features include:
· freely choosable connections, no restrictions besides memory or CPU constraints
· delayed links for recurrent networks
· fixed values or thresholds can be specified for weights
· (recurrent) back-propagation, Hebb, differential Hebb, simulated annealing and more
· patterns can be specified with bits, floats, characters, numbers; random bit patterns with chosen Hamming distances can be generated for you
· user definable error functions
· output results can be used without modification as input

Simple Neural Net (in Python)
· Web site: starship.python.net/crew/amk/unmaintained/

Simple neural network code, which implements a class for 3-level networks (input, hidden, and output layers). The only learning rule implemented is simple backpropagation. No documentation (or even comments) at all, because this is simply code that I use to experiment with. Includes modules containing sample datasets from Carl G. Looney's NN book. Requires the Numeric extensions.

SCNN
· Web site: apx00.physik.uni-frankfurt.de/e_ag_rt/SCNN/

SCNN is a universal simulating system for Cellular Neural Networks (CNN). CNN are analog processing neural networks with regular and local interconnections, governed by a set of nonlinear ordinary differential equations. Due to their local connectivity, CNN are realized as VLSI chips, which operate at very high speed.

Semantic Networks in Python
· Web site: strout.net/info/coding/python/ai/index.html

The semnet.py module defines several simple classes for building and using semantic networks. A semantic network is a way of representing knowledge, and it enables the program to do simple reasoning with very little effort on the part of the programmer. The following classes are defined:
· Entity: This class represents a noun; it is something which can be related to other things, and about which you can store facts.
· Relation: A Relation is a type of relationship which may exist between two entities.
One special relation, "IS_A", is predefined because it has special meaning (a sort of logical inheritance).
· Fact: A Fact is an assertion that a relationship exists between two entities.

With these three object types, you can very quickly define knowledge about a set of objects, and query them for logical conclusions.

SNNS
· Web site: www.informatik.uni-stuttgart.de/ipvr/bv/projekte/snns/
· FTP site: ftp.informatik.uni-stuttgart.de/pub/SNNS/

Stuttgart Neural Net Simulator (version 4.1). An awesome neural net simulator. Better than any commercial simulator I've seen. The simulator kernel is written in C (it's fast!). It supports over 20 different network architectures, has 2D and 3D X-based graphical representations, the 2D GUI has an integrated network editor, and it can generate a separate NN program in C. SNNS is very powerful, though a bit difficult to learn at first. To help with this it comes with example networks and tutorials for many of the architectures. ENZO, a supplementary system, allows you to evolve your networks with genetic algorithms. There is a Debian package of SNNS available. So just get it (and use alien to convert it to RPM if you need to).

SPRLIB/ANNLIB
· Web site: www.ph.tn.tudelft.nl/~sprlib/

SPRLIB (Statistical Pattern Recognition Library) was developed to support the easy construction and simulation of pattern classifiers. It consists of a library of functions (written in C) that can be called from your own program. Most of the well-known classifiers are present (k-nn, Fisher, Parzen, ....), as well as error estimation and dataset generation routines.

ANNLIB (Artificial Neural Networks Library) is a neural network simulation library based on the data architecture laid down by SPRLIB. The library contains numerous functions for creating, training and testing feed-forward networks. Training algorithms include back-propagation, pseudo-Newton, Levenberg-Marquardt, conjugate gradient descent, BFGS....
Furthermore, it is possible - due to the datastructures' general applicability - to build Kohonen maps and other more exotic network architectures using the same data types.

TOOLDIAG
· Web site: www.inf.ufes.br/~thomas/www/home/tooldiag.html
· FTP site: ftp.inf.ufes.br/pub/tooldiag/

TOOLDIAG is a collection of methods for statistical pattern recognition. The main area of application is classification. The application area is limited to multidimensional continuous features, without any missing values. No symbolic features (attributes) are allowed. The program is implemented in the 'C' programming language and was tested in several computing environments.

4. Evolutionary Computing

Evolutionary computing is actually a broad term for a vast array of programming techniques, including genetic algorithms, complex adaptive systems, evolutionary programming, etc. The main thrust of all these techniques is the idea of evolution: the idea that a program can be written that will evolve toward a certain goal. This goal can be anything from solving some engineering problem to winning a game.

4.1. EC class/code libraries

These are libraries of code or classes for use in programming within the evolutionary computation field. They are not meant as stand-alone applications, but rather as tools for building your own applications.

daga
· Web site: GARAGe.cps.msu.edu/software/software-index.html

daga is an experimental release of a 2-level genetic algorithm compatible with the GALOPPS GA software. It is a meta-GA which dynamically evolves a population of GAs to solve a problem presented to the lower-level GAs. When multiple GAs (with different operators, parameter settings, etc.) are simultaneously applied to the same problem, the ones showing better performance have a higher probability of surviving and "breeding" to the next macro-generation (i.e., spawning new "daughter"-GAs with characteristics inherited from the parental GA or GAs).
In this way, we try to encourage good problem-solving strategies to spread to the whole population of GAs.

EO
· Web site: geneura.ugr.es/~jmerelo/EO.html

EO is a templates-based, ANSI-C++ compliant evolutionary computation library. It contains classes for any kind of evolutionary computation (especially genetic algorithms) you might come up with. It is component-based, so that if you don't find the class you need in it, it is very easy to subclass an existing abstract or concrete class.

FORTRAN GA
· Web site: www.staff.uiuc.edu/~carroll/ga.html

This program is a FORTRAN version of a genetic algorithm driver. This code initializes a random sample of individuals with different parameters to be optimized using the genetic algorithm approach, i.e. evolution via survival of the fittest. The selection scheme used is tournament selection with a shuffling technique for choosing random pairs for mating. The routine includes binary coding for the individuals, jump mutation, creep mutation, and the option for single-point or uniform crossover. Niching (sharing) and an option for the number of children per pair of parents have been added. More recently, an option for the use of a micro-GA has been added.

GAGS
· Web site: kal-el.ugr.es/gags.html
· FTP site: kal-el.ugr.es/GAGS/

Genetic Algorithm application generator and class library written mainly in C++. As a class library, and among other things, GAGS includes:
· A chromosome hierarchy with variable-length chromosomes. Genetic operators: 2-point crossover, uniform crossover, bit-flip mutation, transposition (gene interchange between 2 parts of the chromosome), and variable-length operators: duplication, elimination, and random addition.
· Population-level operators include steady state, roulette wheel and tournament selection.
· Gnuplot wrapper: turns gnuplot into an iostreams-like class.
· Easy sample file loading and configuration file parsing.
As an application generator (written in PERL), you only need to supply it with an ANSI-C or C++ fitness function, and it creates a C++ program that uses the above library to 90% capacity, compiles it, and runs it, saving results and presenting fitness through gnuplot.

GAlib: Matthew's Genetic Algorithms Library
· Web site: lancet.mit.edu/ga/
· FTP site: lancet.mit.edu/pub/ga/
· Register GAlib at: lancet.mit.edu/ga/Register.html

GAlib contains a set of C++ genetic algorithm objects. The library includes tools for using genetic algorithms to do optimization in any C++ program using any representation and genetic operators. The documentation includes an extensive overview of how to implement a genetic algorithm as well as examples illustrating customizations to the GAlib classes.

GALOPPS
· Web site: GARAGe.cps.msu.edu/software/software-index.html
· FTP site: garage.cps.msu.edu/pub/GA/galopps/

GALOPPS is a flexible, generic GA, in 'C'. It was based upon Goldberg's Simple Genetic Algorithm (SGA) architecture, in order to make it easier for users to learn to use and extend. GALOPPS extends the SGA capabilities several fold:
· (optional) A new Graphical User Interface, based on TCL/TK, for Unix users, allowing easy running of GALOPPS 3.2 (single or multiple subpopulations) on one or more processors. The GUI writes/reads "standard" GALOPPS input and master files, and displays graphical output (during or after a run) of user-selected variables.
· 5 selection methods: roulette wheel, stochastic remainder sampling, tournament selection, stochastic universal sampling, linear-ranking-then-SUS.
· Random or superuniform initialization of "ordinary" (non-permutation) binary or non-binary chromosomes; random initialization of permutation-based chromosomes; or user-supplied initialization of arbitrary types of chromosomes.
· Binary or non-binary alphabetic fields on value-based chromosomes, including different user-definable field sizes.
· 3 crossovers for value-based representations: 1-pt, 2-pt, and uniform, all of which operate at field boundaries if a non-binary alphabet is used.
· 4 crossovers for order-based reps: PMX, order-based, uniform order-based, and cycle.
· 4 mutations: fast bitwise, multiple-field, swap and random sublist scramble.
· Fitness scaling: linear scaling, Boltzmann scaling, sigma truncation, window scaling, ranking.
· Plus a whole lot more....

GAS
· Web site: starship.skyport.net/crew/gandalf
· FTP site: ftp.coe.uga.edu/users/jae/ai

GAS means "Genetic Algorithms Stuff". GAS is freeware. The purpose of GAS is to explore and exploit artificial evolutions. The primary implementation language of GAS is Python. The GAS software package is meant to be a Python framework for applying genetic algorithms. It contains an example application which tries to breed Python program strings. This special problem falls into the category of Genetic Programming (GP), and/or Automatic Programming. Nevertheless, GAS tries to be useful for other applications of Genetic Algorithms as well.

GECO
· FTP site: ftp://ftp.aic.nrl.navy.mil/pub/galist/src/

GECO (Genetic Evolution through Combination of Objects) is an extendible object-oriented tool-box for constructing genetic algorithms (in Lisp). It provides a set of extensible classes and methods designed for generality. Some simple examples are also provided to illustrate the intended use.

GPdata
· FTP site: ftp.cs.bham.ac.uk/pub/authors/W.B.Langdon/gp-code/
· Documentation (GPdata-icga-95.ps): cs.ucl.ac.uk/genetic/papers/

GPdata-3.0.tar.gz (C++) contains a version of Andy Singleton's GP-Quick version 2.1 which has been extensively altered to support:
· Indexed memory operation (cf.
teller)
· multi-tree programs
· ADFs
· parameter changes without recompilation
· populations partitioned into demes
· (A version of) pareto fitness

This ftp site also contains a small C++ program (ntrees.cc) to calculate the number of different trees there are of a given length and given function and terminal set.

gpjpp Genetic Programming in Java
· [Dead Link] Web site: http://www.turbopower.com/~kimk/gpjpp.asp
· Anyone who knows where to find gpjpp, please let me know.

gpjpp is a Java package I wrote for doing research in genetic programming. It is a port of the gpc++ kernel written by Adam Fraser and Thomas Weinbrenner. Included in the package are four of Koza's standard examples: the artificial ant, the hopping lawnmower, symbolic regression, and the boolean multiplexer. Here is a partial list of its features:
· graphic output of expression trees
· efficient diversity checking
· Koza's greedy over-selection option for large populations
· extensible GPRun class that encapsulates most details of a genetic programming test
· more robust and efficient streaming code, with automatic checkpoint and restart built into the GPRun class
· an explicit complexity limit that can be set on each GP
· additional configuration variables to allow more testing without recompilation
· support for automatically defined functions (ADFs)
· tournament and fitness proportionate selection
· demetic grouping
· optional steady state population
· subtree crossover
· swap and shrink mutation

GP Kernel
· Web site (???): www.emk.e-technik.th-darmstadt.de/~thomasw/gp.html

The GP kernel is a C++ class library that can be used to apply genetic programming techniques to all kinds of problems. The library defines a class hierarchy. An integral component is the ability to produce automatically defined functions as found in Koza's "Genetic Programming II". Technical documentation (postscript format) is included. There is also a short introduction into genetic programming.
Functionality includes: automatically defined functions (ADFs), tournament and fitness proportionate selection, demetic grouping, optional steady state genetic programming kernel, subtree crossover, swap and shrink mutation, a way of changing every parameter of the system without recompilation, capacity for multiple populations, loading and saving of populations and genetic programs, standard random number generator, internal parameter checks.

lil-gp
· Web site: GARAGe.cps.msu.edu/software/software-index.html#lilgp
· FTP site: garage.cps.msu.edu/pub/GA/lilgp/

patched lil-gp *
· Web site: www.cs.umd.edu/users/seanl/gp/

lil-gp is a generic 'C' genetic programming tool. It was written with a number of goals in mind: speed, ease of use and support for a number of options including:
· Generic 'C' program that runs on UNIX workstations
· Support for multiple population experiments, using arbitrary and user settable topologies for exchange, for a single processor (i.e., you can do multiple population gp experiments on your PC).
· lil-gp manipulates trees of function pointers which are allocated in single, large memory blocks for speed and to avoid swapping.

* The patched lil-gp kernel is strongly-typed, with modifications on multithreading, coevolution, and other tweaks and features.

PGAPack Parallel Genetic Algorithm Library
· Web site: www.mcs.anl.gov/~levine/PGAPACK/
· FTP site: ftp.mcs.anl.gov/pub/pgapack/

PGAPack is a general-purpose, data-structure-neutral, parallel genetic algorithm library. It is intended to provide most capabilities desired in a genetic algorithm library, in an integrated, seamless, and portable manner. Key features in PGAPack V1.0 include:
· Callable from Fortran or C.
· Runs on uniprocessors, parallel computers, and workstation networks.
· Binary-, integer-, real-, and character-valued native data types.
· Full extensibility to support custom operators and new data types.
· Easy-to-use interface for novice and application users.
· Multiple levels of access for expert users.
· Parameterized population replacement.
· Multiple crossover, mutation, and selection operators.
· Easy integration of hill-climbing heuristics.
· Extensive debugging facilities.
· Large set of example problems.
· Detailed users guide.

PIPE
· Web site: www.idsia.ch/~rafal/research.html
· FTP site: ftp.idsia.ch/pub/rafal

Probabilistic Incremental Program Evolution (PIPE) is a novel technique for automatic program synthesis. The software is written in C. It
· is easy to install (comes with an automatic installation tool).
· is easy to use: setting up PIPE_V1.0 for different problems requires a minimal amount of programming. User-written, application-independent program parts can easily be reused.
· is efficient: PIPE_V1.0 has been tuned to speed up performance.
· is portable: comes with source code (optimized for SunOS 5.5.1).
· is extensively documented(!) and contains three example applications.
· supports statistical evaluations: it facilitates running multiple experiments and collecting results in output files.
· includes a testing tool for testing generalization of evolved programs.
· supports floating point and integer arithmetic.
· has extensive output features.
· For lil-gp users: problems set up for lil-gp 1.0 can be easily ported to PIPE_v1.0. The testing tool can also be used to process programs evolved by lil-gp 1.0.

Sugal
· Web site: www.trajan-software.demon.co.uk/sugal.htm

Sugal [soo-gall] is the SUnderland Genetic ALgorithm system. The aim of Sugal is to support research and implementation in Genetic Algorithms on a common software platform. As such, Sugal supports a large number of variants of Genetic Algorithms, and has extensive features to support customization and extension.

4.2. EC software kits/applications

These are various applications, software kits, etc. meant for research in the field of evolutionary computing.
Their ease of use will vary, as they were designed to meet some particular research interest more than as an easy-to-use commercial package.

ADATE
· Web site: www-ia.hiof.no/~rolando/adate_intro.html

ADATE (Automatic Design of Algorithms Through Evolution) is a system for automatic programming, i.e., inductive inference of algorithms, which may be the best way to develop artificial and general intelligence. The ADATE system can automatically generate non-trivial and novel algorithms. Algorithms are generated through large scale combinatorial search that employs sophisticated program transformations and heuristics. The ADATE system is particularly good at synthesizing symbolic, functional programs and has several unique qualities.

esep & xesep
· Web site (esep): www.iit.edu/~linjinl/esep.html
· Web site (xesep): www.iit.edu/~linjinl/xesep.html

This is a new scheduler, called Evolution Scheduler, based on Genetic Algorithms and Evolutionary Programming. It coexists with the original Linux priority scheduler. This means you don't have to reboot to change the scheduling policy. You may simply use the manager program esep to switch between them at any time, and esep itself is an all-in-one tool for scheduling status, commands, and administration. We didn't intend to remove the original priority scheduler; instead, esep at least provides you with another choice: a more intelligent scheduler, which carries out natural competition in an easy and effective way.

Xesep is a graphical user interface to esep (Evolution Scheduling and Evolving Processes). It is intended to show users how to start, play with, and get a feel for the Evolution Scheduling and Evolving Processes, and includes sub-programs to display system status, evolving process status, queue status, and evolution scheduling status periodically, at intervals as small as one millisecond.
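All of the GA packages in this section (FORTRAN GA, GAlib, GALOPPS, PGAPack, and friends) are elaborations of the same generational loop: initialize a population, select parents, cross them over, mutate the children, repeat. As a minimal illustration only, here is that loop in Python on the classic OneMax toy problem (maximize the number of 1 bits); every name here is invented for this sketch and nothing is taken from the libraries above:

```python
import random

random.seed(0)

GENES, POP, GENERATIONS = 32, 40, 60

def fitness(ind):
    """OneMax: the fitness of a bitstring is simply its count of 1 bits."""
    return sum(ind)

def tournament(pop, k=3):
    """Tournament selection: return the fittest of k randomly drawn individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover: splice a prefix of one parent onto a suffix of the other."""
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind, rate=1.0 / GENES):
    """Bit-flip mutation: each gene flips independently with small probability."""
    return [g ^ 1 if random.random() < rate else g for g in ind]

# random initial population of bitstrings
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

# the generational loop: select two parents, recombine, mutate, refill
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))
```

The real libraries differ from this sketch mainly in what they parameterize: alternative selection schemes (roulette wheel, stochastic universal sampling), elitism, niching, multiple subpopulations, and non-binary representations.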
Corewars
· Web site: corewars.sourceforge.net/
· SourceForge site: sourceforge.net/project/?group_id=3054

Corewars is a game which simulates a virtual machine with a number of programs. Each program tries to crash the others. The program that lasts the longest time wins. A number of sample programs are provided and new programs can be written by the player. Screenshots are available at the Corewars homepage.

Corewar VM
· Web site: www.jedi.claranet.fr/

This is a virtual machine written in Java (so it is a virtual machine for another virtual machine!) for a Corewar game.

FSM-Evolver
· Web site (???): pages.prodigy.net/czarneckid

A Java (jdk-v1.0.2+) code library that is used to evolve finite state machines. The problem included in the package is the Artificial Ant problem. You should be able to compile the .java files and then run: java ArtificialAnt.

GPsys
· Web site: www.cs.ucl.ac.uk/staff/A.Qureshi/gpsys.html

GPsys (pronounced gipsys) is a Java (requires Java 1.1 or later) based Genetic Programming system developed by Adil Qureshi. The software includes documentation, source and executables. Feature summary:
· Steady State engine
· ADF support
· Strongly Typed
  1. supports generic functions and terminals
  2. has many built-in primitives
  3. includes indexed memory
· Save/Load feature
  1. can save/load current generation to/from a file
  2. data stored in GZIP compression format to minimise disk requirements
  3. uses serialisable objects for efficiency
· Fully Documented
· Example Problems
  1. Lawnmower (including GUI viewer)
  2. Symbolic Regression
· Totally Parameterised
· Fully Object Oriented and Extensible
· High Performance
· Memory Efficient

JGProg
· Web site: www.linuxstart.com/~groovyjava/JGProg/

Genetic Programming (JGProg) is an open-source Java implementation of a strongly-typed Genetic Programming experimentation platform. Two example "worlds" are provided, in which a population evolves and solves the problem.

5.
Alife & Complex Systems Alife takes yet another approach to exploring the mysteries of intelligence. It has many aspects similar to EC and Connectionism, but takes these ideas and gives them a meta-level twist. Alife emphasizes the development of intelligence through emergent behavior of complex adaptive systems. Alife stresses the social or group based aspects of intelligence. It seeks to understand life and survival. By studying the behaviors of groups of 'beings' Alife seeks to discover the way intelligence or higher order activity emerges from seemingly simple individuals. Cellular Automata and Conway's Game of Life are probably the most commonly known applications of this field. Complex Systems (abbreviated CS) are very similar to alife in the way the are approached, just more general in definition (ie. alife is a type of complex system). Usually complex system software takes the form of a simulator. 5.1. Alife & CS class/code libraries These are libraries of code or classes for use in programming within the artificial life field. They are not meant as stand alone applications, but rather as tools for building your own applications. CASE · Web site: www.iu.hioslo.no/~cell/ · FTP site: ftp.iu.hioslo.no/pub/ CASE (Cellular Automaton Simulation Environment) is a C++ toolkit for visualizing discrete models in two dimensions: so- called cellular automata. The aim of this project is to create an integrated framework for creating generalized cellular automata using the best, standardized technology of the day. John von Neumann Universal Constructor · Web site: alife.santafe.edu/alife/software/jvn.html · FTP site: alife.santafe.edu/pub/SOFTWARE/jvn/ The universal constructor of John von Neumann is an extension of the logical concept of universal computing machine. In the cellular environment proposed by von Neumann both computing and constructive universality can be achieved. 
Von Neumann proved that in his cellular lattice both a Turing machine and a machine capable of producing any other cell assembly, when fed with a suitable program, can be embedded. He called the latter machine a ''universal constructor'' and showed that, when provided with a program containing its own description, it is capable of self-reproduction.

Swarm

· Web site: www.santafe.edu/projects/swarm
· FTP site: ftp.santafe.edu/pub/swarm

Swarm is a simulation environment which facilitates development of, and experimentation with, simulations involving a large number of agents behaving and interacting within a dynamic environment. It consists of a collection of classes and libraries written in Objective-C and allows great flexibility in creating simulations and analyzing their results. It comes with three demos and good documentation. Swarm 1.0 is out. It requires libtclobjc and BLT 2.1 (both available at the Swarm site).

5.2. Alife & CS software kits, applications, etc.

These are various applications, software kits, etc., meant for research in the field of artificial life. Their ease of use will vary, as they were designed to meet particular research interests rather than to serve as easy-to-use commercial packages.

Avida

· Web site: http://www.krl.caltech.edu/avida/home/software.html
· Web site: www.krl.caltech.edu/avida/pubs/nature99/

The computer program avida is an auto-adaptive genetic system designed primarily for use as a platform in Artificial Life research. The avida system is based on concepts similar to those employed by the Tierra program; that is to say, it is a population of self-reproducing strings with a Turing-complete genetic basis subjected to Poisson-random mutations. The population adapts to the combination of an intrinsic fitness landscape (self-reproduction) and an externally imposed (extrinsic) fitness function provided by the researcher.
By studying this system, one can examine evolutionary adaptation, general traits of living systems (such as self-organization), and other issues pertaining to theoretical or evolutionary biology and dynamic systems.

BugsX

· FTP site: ftp.de.uu.net/pub/research/ci/Alife/packages/bugsx/

Display and evolve biomorphs. It is a program which draws biomorphs based on parametric plots of Fourier sine and cosine series and lets you play with them using the genetic algorithm.

The Cellular Automata Simulation System

· Web site: www.cs.runet.edu/~dana/ca/cellular.html

The system consists of a compiler for the Cellang cellular automata programming language, along with the corresponding documentation, viewer, and various tools. Cellang has been undergoing refinement for the last several years (1991-1995), with corresponding upgrades to the compiler. Postscript versions of the tutorial and language reference manual are available for those wanting more detailed information. The most important distinguishing features of Cellang include support for:

· any number of dimensions;
· compile-time specification of each dimension's size;
· cell neighborhoods of any size (though bounded at compile time) and shape;
· positional and time dependent neighborhoods;
· associating multiple values (fields), including arrays, with each cell;
· associating a potentially unbounded number of mobile agents [Agents are mobile entities based on a mechanism of the same name in the Creatures system, developed by Ian Stephenson (ian@ohm.york.ac.uk).] with each cell; and
· local interactions only, since it is impossible to construct automata that contain any global control or references to global variables.

Cyphesis

· Web site: www.worldforge.org/website/servers/cyphesis/

Cyphesis will be the AI engine, or more plainly, the intelligence behind Worldforge (WF). Cyphesis aims to achieve 'live' virtual worlds. Animals will have young, prey on each other, and eventually die.
Plants grow, flower, bear fruit and even die, just as they do in real life. When completed, NPCs in Cyphesis will do all sorts of interesting things, like attempt to accomplish ever-changing goals that NPCs set for themselves, gossip to PCs and other NPCs, live, die and raise children. Cyphesis aims to make NPCs act just like you and me.

dblife & dblifelib

· FTP site: ftp.cc.gatech.edu/ac121/linux/games/amusements/life/

dblife: Sources for a fancy Game of Life program for X11 (and curses). It is not meant to be incredibly fast (use xlife for that :-), but it IS meant to allow the easy editing and viewing of Life objects, and it has some powerful features. The related dblifelib package is a library of Life objects to use with the program.

dblifelib: This is a library of interesting Life objects, including oscillators, spaceships, puffers, and other weird things. The related dblife package contains a Life program which can read the objects in the library.

Drone

· Web site: pscs.physics.lsa.umich.edu/Software/Drone/

Drone is a tool for automatically running batch jobs of a simulation program. It allows sweeps over arbitrary sets of parameters, as well as multiple runs for each parameter set, with a separate random seed for each run. The runs may be executed either on a single computer or over the Internet on a set of remote hosts. Drone is written in Expect (an extension to the Tcl scripting language) and runs under Unix. It was originally designed for use with the Swarm agent-based simulation framework, but Drone can be used with any simulation program that reads parameters from the command line or from an input file.

EBISS

· Web site: www.ebiss.org/english/

EBISS is a multi-disciplinary, open, collaborative project aimed at investigating social problems by means of computational modeling and social simulations. During the past four years we have been developing SARA, a multi-agent gaming simulation platform providing for easy construction of simulations and games.
We believe that in order to achieve a breakthrough in the difficult task of understanding real-world complex social problems, we need to gather researchers and experts with different backgrounds not only in discussion forums, but in the tighter cooperative task of building and sharing common experimental platforms.

EcoLab

· Web site: parallel.acsu.unsw.edu.au/rks/ecolab.html

EcoLab is a system that implements an abstract ecology model. It is written as a set of Tcl/Tk commands so that the model parameters can easily be changed on the fly by editing a script. The model itself is written in C++.

Game Of Life (GOL)

· Web site: www.arrakeen.demon.co.uk/downloads.html
· FTP site: metalab.unc.edu/pub/Linux/science/ai/life

GOL is a simulator for Conway's Game of Life (a simple cellular automaton) and other simple rule sets. The emphasis here is on speed and scale; in other words, you can set up large and fast simulations.

gLife

· Web site: glife.sourceforge.net
· SourceForge site: sourceforge.net/project/?group_id=748

This program is similar to "Conway's Game of Life", yet it is very different. It takes "Conway's Game of Life" and applies it to a (human) society. This means there is a very different (and much larger) ruleset than in the original game. Things such as terrain, age, sex, culture, movement, etc., need to be taken into account.

Grany-3

· Web site: altern.org/gcottenc/html/grany.html

Grany-3 is a full-featured cellular automaton simulator, made in C++ with Gtk--, flex++/bison++, doxygen and gettext, useful to granular media physicists.

Langton's Ant

· Web site: theory.org/software/ant/

Langton's Ant is an example of a finite-state cellular automaton. The ant (or ants) start out on a grid. Each cell is either black or white. If the ant is on a black square, it turns right 90° and moves forward one unit. If the ant is on a white square, it turns left 90° and moves forward one unit. And when the ant leaves a square, it inverts the color.
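The rule just described is simple enough to sketch in a few lines of Python (the step count in the usage line is an arbitrary choice for illustration, and which turn counts as "right" depends on the grid orientation you pick):

```python
# Langton's Ant: turn one way on black, the other way on white,
# invert the cell's color on leaving, then step forward.
from collections import defaultdict

def run_ant(steps):
    grid = defaultdict(bool)              # False = white, True = black
    x = y = 0                             # ant position
    dx, dy = 0, -1                        # ant heading
    for _ in range(steps):
        if grid[(x, y)]:                  # black square: turn one way
            dx, dy = -dy, dx
        else:                             # white square: turn the other way
            dx, dy = dy, -dx
        grid[(x, y)] = not grid[(x, y)]   # invert the color on leaving
        x, y = x + dx, y + dy             # move forward one unit
    return grid

if __name__ == "__main__":
    cells = run_ant(11000)                # long enough for the "road" to emerge
    print(sum(cells.values()), "black cells")
```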
The neat thing about Langton's Ant is that no matter what pattern field you start it out on, it eventually builds a "road": a series of 104 steps that repeats indefinitely, each period leaving the ant displaced diagonally.

LEE

· Web site: dollar.biz.uiowa.edu/~fil/LEE/
· FTP site: dollar.biz.uiowa.edu/pub/fil/LEE/

LEE (Latent Energy Environments) is both an Alife model and a software tool to be used for simulations within the framework of that model. We hope that LEE will help in understanding a broad range of issues in theoretical, behavioral, and evolutionary biology. The LEE tool described here consists of approximately 7,000 lines of C code and runs on both Unix and Macintosh platforms.

Net-Life & ZooLife

· FTP site: ftp.coe.uga.edu/users/jae/alife/ (netlife-2.0.tar.gz contains both Net-Life and ZooLife)

Net-Life is a simulation of artificial life, with neural "brains" generated via slightly random techniques. Net-Life uses artificial neural nets and evolutionary algorithms to breed artificial organisms that are similar to single-cell organisms. Net-Life uses asexual reproduction of its fittest individuals, with a chance of mutation after each round, to eventually evolve successful life-forms.

ZooLife is a simulation of artificial life. ZooLife uses probabilistic methods and evolutionary algorithms to breed artificial organisms that are similar to plant/animal zoo organisms. ZooLife uses asexual reproduction with a chance of mutation.

POSES++

· Web site: www.tu-chemnitz.de/ftp-home/pub/Local/simulation/poses++/www/index.html

The POSES++ software tool supports the development and simulation of models. In simulation, models are suitable reproductions of real or planned systems for investigation. POSES++ can model and simulate, for any industrial sector or branch, any system which is based on discrete and discontinuous behaviour.
Also, continuous systems can mostly be handled like discrete systems, e.g., by quantity discretization and batch processing.

Primordial Soup

· Web site: alife.santafe.edu/alife/software/psoup.html

Primordial Soup is an artificial life program. Organisms in the form of computer software loops live in a shared memory space (the "soup") and self-reproduce. The organisms mutate and evolve, behaving in accordance with the principles of Darwinian evolution. The program may be started with one or more organisms seeding the soup. Alternatively, the system may be started "sterile", with no organisms present. Spontaneous generation of self-reproducing organisms has been observed after runs as short as 15 minutes.

Tierra

· Web site: www.hip.atr.co.jp/~ray/tierra/tierra.html
· FTP site: alife.santafe.edu/pub/SOFTWARE/Tierra/
· Alternate FTP site: ftp.cc.gatech.edu/ac121/linux/science/biology/

Tierra is written in the C programming language. This source code creates a virtual computer and its operating system, whose architecture has been designed in such a way that the executable machine codes are evolvable. This means that the machine code can be mutated (by flipping bits at random) or recombined (by swapping segments of code between algorithms), and the resulting code remains functional enough of the time for natural (or presumably artificial) selection to be able to improve the code over time.

TIN

· FTP site: ftp.coe.uga.edu/users/jae/alife/

This program simulates primitive life-forms, equipped with some basic instincts and abilities, in a 2D environment consisting of cells. By mutation, new generations can prove their success, and thus pass on "good family values". The brain of a TIN can be seen as a collection of processes, each representing drives or impulses to behave a certain way, depending on the state/perception of the environment (e.g.
presence of food, walls, neighbors, scent traces). These behavior processes currently are: eating, moving, mating, relaxing, tracing others, gathering food and killing. The process with the highest impulse value takes control; in other words, the TIN will act according to its most urgent need.

XLIFE

· FTP site: ftp.cc.gatech.edu/ac121/linux/games/amusements/life/

This program will evolve patterns for John Horton Conway's Game of Life. It will also handle general cellular automata with the orthogonal neighborhood and up to 8 states (it's possible to recompile for more states, but this is very expensive in memory). Transition rules and sample patterns are provided for the 8-state automaton of E. F. Codd, the Wireworld automaton, and a whole class of "Prisoner's Dilemma" games.

Xtoys

· Web site: penguin.phy.bnl.gov/www/xtoys.html

xtoys contains a set of cellular automata simulators for X windows. Programs included are:

· xising --- a two dimensional Ising model simulator,
· xpotts --- the two dimensional Potts model,
· xautomalab --- a totalistic cellular automaton simulator,
· xsand --- for the Bak, Tang, Wiesenfeld sandpile model,
· xwaves --- demonstrates three different wave equations,
· schrodinger --- play with the Schrödinger equation in an adjustable potential.

6. Autonomous Agents

Also known as intelligent software agents or just agents, this area of AI research deals with simple applications of small programs that aid the user in his/her work. They can be mobile (able to stop their execution on one machine and resume it on another) or static (living on one machine). They are usually specific to the task (and therefore fairly simple) and meant to help the user much as an assistant would. The most popular (i.e., widely known) use of this type of application to date are the web robots that many of the indexing engines (e.g., WebCrawler) use.
AgentK

· FTP site: ftp.csd.abdn.ac.uk/pub/wdavies/agentk

This package synthesizes two well-known agent paradigms: Agent-Oriented Programming, Shoham (1990), and the Knowledge Query & Manipulation Language, Finin (1993). The initial implementation of AOP, Agent-0, is a simple language for specifying agent behaviour. KQML provides a standard language for inter-agent communication. Our integration (which we have called Agent-K) demonstrates that Agent-0 and KQML are highly compatible. Agent-K provides the possibility of inter-operable (or open) software agents that can communicate via KQML and which are programmed using the AOP approach.

Agent

· FTP site: www.cpan.org/modules/by-category/23_Miscellaneous_Modules/Agent/

The Agent is a prototype for an Information Agent system. It is both platform and language independent, as it stores contained information in simple packed strings. It can be packed and shipped across any network with any format, as it freezes itself in its current state.

D'Agent (was Agent Tcl)

· Web site: agent.cs.dartmouth.edu/software/agent2.0/
· FTP site: ftp.cs.dartmouth.edu/pub/agents/

A transportable agent is a program that can migrate from machine to machine in a heterogeneous network. The program chooses when and where to migrate. It can suspend its execution at an arbitrary point, transport itself to another machine, and resume execution on the new machine. For example, an agent carrying a mail message migrates first to a router and then to the recipient's mailbox. The agent can perform arbitrarily complex processing at each machine in order to ensure that the message reaches the intended recipient.

Aglets Workbench

· Web site: www.trl.ibm.co.jp/aglets/

An aglet is a Java object that can move from one host on the Internet to another. That is, an aglet that executes on one host can suddenly halt execution, dispatch itself to a remote host, and resume execution there. When the aglet moves, it takes along its program code as well as its state (data).
A built-in security mechanism makes it safe for a computer to host untrusted aglets. The Java Aglet API (J-AAPI) is a proposed public standard for interfacing aglets and their environment. J-AAPI contains methods for initializing an aglet, message handling, and dispatching, retracting, deactivating/activating, cloning, and disposing of the aglet. J-AAPI is simple, flexible, and stable. Application developers can write platform-independent aglets and expect them to run on any host that supports J-AAPI.

A.L.I.C.E.

· Web site: www.alicebot.org

The ALICE software implements AIML (Artificial Intelligence Markup Language), a non-standard evolving markup language for creating chat robots. The primary design feature of AIML is minimalism. Compared with other chat robot languages, AIML is perhaps the simplest. The pattern matching language is very simple, for example permitting only one wild-card ('*') match character per pattern. AIML is an XML language, implying that it obeys certain grammatical meta-rules. The choice of XML syntax permits integration with other tools such as XML editors. Another motivation for XML is its familiar look and feel, especially to people with HTML experience.

Ara

· Web site: www.uni-kl.de/AG-Nehmer/Projekte/Ara/index_e.html

Ara is a platform for the portable and secure execution of mobile agents in heterogeneous networks. Mobile agents in this sense are programs with the ability to change their host machine during execution while preserving their internal state. This enables them to handle interactions locally which would otherwise have to be performed remotely. Ara's specific aim, in comparison to similar platforms, is to provide full mobile agent functionality while retaining as much as possible of established programming models and languages.

Bee-gent

· Web site: www2.toshiba.co.jp/beegent/index.htm

Bee-gent is a new type of development framework in that it is a 100% pure agent system.
As opposed to other systems which make only some use of agents, Bee-gent completely "agentifies" the communication that takes place between software applications. The applications become agents, and all messages are carried by agents. Thus, Bee-gent allows developers to build flexible open distributed systems that make optimal use of existing applications.

Bots

· Web site: utenti.tripod.it/Claudio1977/bots.html

Another AI-robot battle simulation, utilizing probabilistic logic as a machine learning technique. Written in C++ (with C++ bots).

Cadaver

· Web site: www.erikyyy.de/cadaver/

Cadaver is a simulated world of cyborgs and nature in realtime. The battlefield consists of forests, grain, water, grass, carcass (of course) and lots of other things. The game server manages the game and the rules. You start a server and connect some clients. The clients communicate with the server using a very primitive protocol. They can order cyborgs to harvest grain, attack enemies or cut forest. The game is not intended to be played by humans! There is too much to control. Only for die-hards: just telnet to the server and you can enter commands by hand. Instead, the idea is that you write artificial intelligence clients to beat the other artificial intelligences. You can choose a language (and operating system) of your choice for that task. It is enough to write a program that communicates on the standard input and standard output channels. Then you can use programs like "socket" to connect your clients to the server. It is NOT necessary to write TCP/IP code, although I did so :) The battle shall not be boring, and so there is the so-called spyboss client that displays the action graphically on screen.

Dunce

· Web site: www.boswa.com/boswabits/

Dunce is a simple chatterbot (conversational AI) and a language for programming such chatterbots. It uses basic regex pattern matching and a semi-neural rule/response firing mechanism (with excitement/decay cycles).
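The regex pattern/response idea behind such chatterbots can be illustrated in a few lines of Python; the rules below are invented for the example and have nothing to do with Dunce's actual rule language:

```python
import re

# Illustrative rules: (compiled pattern, response template).
# Back-references like \1 reuse text captured from the user's input.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), r"Nice to meet you, \1."),
    (re.compile(r"\bhow are you\b", re.I), "I'm just a few regexes, but fine."),
    (re.compile(r"\b(\w+) is broken\b", re.I), r"What makes you say \1 is broken?"),
]

def reply(line):
    # Fire the first rule whose pattern matches; fall back to a stock answer.
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return match.expand(template)
    return "Tell me more."

if __name__ == "__main__":
    print(reply("My name is Alice"))
```

A system like Dunce layers excitement/decay state on top of this so that rule choice depends on conversation history, not just the current line.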
Dunce is listed about halfway down the page.

FishMarket

· Web site: www.iiia.csic.es/Projects/fishmarket/

FM - The FishMarket project, conducted at the Artificial Intelligence Research Institute (IIIA-CSIC), attempts to contribute in that direction by developing FM, an agent-mediated electronic auction house which has evolved into a test-bed for electronic auction markets. The framework, conceived and implemented as an extension of FM96.5 (a Java-based version of the Fishmarket auction house), allows one to define trading scenarios based on fish market auctions (Dutch auctions). FM provides the framework wherein agent designers can perform controlled experimentation, in such a way that a multitude of experimental market scenarios--which we regard as tournament scenarios due to the competitive nature of the domain--of varying degrees of realism and complexity can be specified, activated, and recorded; and heterogeneous (human and software) trading (buyer and seller) agents compared, tuned and evaluated.

Hive

· Web site: www.hivecell.net/

Hive is a Java software platform for creating distributed applications. Using Hive, programmers can easily create systems that connect and use data from all over the Internet. At its heart, Hive is an environment in which distributed agents live, communicating and moving to fulfill applications. We are trying to make the Internet alive.

Jade

· Web site: sharon.cselt.it/projects/jade/

JADE (Java Agent DEvelopment Framework) is a software framework fully implemented in the Java language. It simplifies the implementation of multi-agent systems through a middle-ware that claims to comply with the FIPA specifications, and through a set of tools that support the debugging and deployment phases. The agent platform can be distributed across machines (which need not even share the same OS), and the configuration can be controlled via a remote GUI.
The configuration can even be changed at run-time by moving agents from one machine to another, as and when required.

JAFMAS

· Web site: www.ececs.uc.edu/~abaker/JAFMAS

JAFMAS provides a framework to guide the coherent development of multiagent systems, along with a set of classes for agent deployment in Java. The framework is intended to help beginning and expert developers structure their ideas into concrete agent applications. It directs development from a speech-act perspective and supports multicast and directed communication, KQML or other speech-act performatives, and analysis of multiagent system coherency and consistency. Only four of the provided Java classes must be extended for any application. The provided examples of the N-Queens and Supply Chain Integration problems use only 567 and 1276 lines of additional code, respectively.

JAM Agent

· Web site: members.home.net/marcush/IRS/

JAM supports both top-down, goal-based reasoning and bottom-up, data-driven reasoning. JAM selects goals and plans based on maximal priority if metalevel reasoning is not used, or on user-developed metalevel reasoning plans if they exist. JAM's conceptualization of goals and goal achievement is more classically defined (UMPRS is more behavioral performance-based than truly goal-based) and makes the distinction between plans to achieve goals and plans that simply encode behaviors. Goal types implemented include achievement (attain a specified world state), maintenance (re-attain a specified world state), and performance. Execution of multiple simultaneous goals is supported, with suspension and resumption capabilities for each goal (i.e., intention) thread. JAM plans have explicit precondition and runtime attributes that restrict their applicability, a postcondition attribute, and a plan attributes section for specifying plan/domain-specific plan features.
Available plan constructs include: sequencing, iteration, subgoaling, atomic (i.e., non-interruptable) plan segments, n-branch deterministic and non-deterministic conditional execution, parallel execution of multiple plan segments, goal-based or world state-based synchronization, an explicit failure-handling section, and Java primitive function definition (by building it into JAM), as well as the invocation of predefined (i.e., legacy) class members via Java's reflection capabilities, without having to build them into JAM.

JATLite

· Web site: java.stanford.edu/java_agent/html/

JATLite provides a set of Java packages which make it easy to build multi-agent systems using Java. JATLite provides only a light-weight, small set of packages, so that developers can handle all the packages with little effort. For flexibility, JATLite provides four different layers, from abstract to Router implementation. A user can access any of these layers. Each layer has a different set of assumptions, and the user can choose an appropriate layer according to those assumptions and the user's application. The introduction page contains JATLite features and the set of assumptions for each layer.

JATLiteBeans

· Web site: waitaki.otago.ac.nz/JATLiteBean/

· Improved, easier-to-use interface to JATLite features, including KQML message parsing, receiving, and sending.
· Extensible architecture for message handling and agent "thread of control" management.
· Useful functions for parsing simple KQML message content.
· Automatic advertising of agent capabilities to facilitator agents.
· Automatic, optional handling of the "forward" performative.
· Generic configuration file parser.
· KQML syntax checker.

Java(tm) Agent Template

· Web site: cdr.stanford.edu/ABE/JavaAgent.html

The JAT provides a fully functional template, written entirely in the Java language, for constructing software agents which communicate peer-to-peer with a community of other agents distributed over the Internet. Although portions of the code which define each agent are portable, JAT agents are not migratory, but rather have a static existence on a single host. This behavior is in contrast to many other "agent" technologies. (However, using Java RMI, JAT agents could dynamically migrate to a foreign host via an agent resident on that host.) Currently, all agent messages use KQML as a top-level protocol or message wrapper. The JAT includes functionality for dynamically exchanging "Resources", which can include Java classes (e.g. new languages and interpreters, remote services, etc.), data files, and information inlined into the KQML messages.

Java-To-Go

· Web site: ptolemy.eecs.berkeley.edu/dgm/javatools/java-to-go/

Java-To-Go is an experimental infrastructure that assists in the development of, and experimentation with, mobile agents and agent-based applications for itinerative computing (itinerative computing: the set of applications that require site-to-site computations). The main emphasis here is on an easy-to-setup environment that promotes quick experimentation with mobile agents.

Kafka

· Web site: www.fujitsu.co.jp/hypertext/free/kafka/

Kafka is yet another agent library designed for constructing multi-agent based distributed applications.
Kafka is a flexible, extendable, and easy-to-use Java class library for programmers who are familiar with distributed programming. It is based on Java's RMI and has the following added features:

· Runtime Reflection: Agents can modify their behaviour (program code) at runtime. The behaviour of the agent is represented by an abstract class, Action. This is useful for remote maintenance or installation services.
· Remote Evaluation: Agents can receive and evaluate program code (classes) with or without the serialized object. Remote evaluation is a fundamental function of a mobile agent and can be thought of as a push model of service delivery.
· Distributed Name Service: Agents have any number of logical names that don't contain the host name. These names can be managed by the distributed directories.
· Customizable security policy: a very flexible, customizable, 3-layered security model is implemented in Kafka.
· 100% Java and RMI compatible: Kafka is written completely in Java. An agent is itself a Java RMI server object, so agents can directly communicate with other RMI objects.

Khepera Simulator

· Web site: diwww.epfl.ch/lami/team/michel/khep-sim/

Khepera Simulator is a public domain software package written by Olivier Michel during the preparation of his Ph.D. thesis at the Laboratoire I3S, URA 1376 of CNRS and the University of Nice-Sophia Antipolis, France. It allows you to write your own controllers for the mobile robot Khepera in C or C++, to test them in a simulated environment, and it features a nice colorful X11 graphical interface. Moreover, if you own a Khepera robot, it can drive the real robot using the same control algorithm. It is mainly oriented toward researchers studying autonomous agents.

lyntin

· Web site: lyntin.sourceforge.net/

Lyntin is an extensible Mud client and framework for the creation of autonomous agents, or bots, as well as for mudding in general.
Lyntin is centered around Python, a dynamic, object-oriented, and fun programming language, and is based on TinTin++, a lovely mud client.

Mole

· Web site: mole.informatik.uni-stuttgart.de/

Mole is an agent system supporting mobile agents programmed in Java. Mole's agents consist of a cluster of objects which have no references to the outside and which, as a whole, work on tasks given by the user or another agent. They have the ability to roam a network of "locations" autonomously. These "locations" are an abstraction of real, existing nodes in the underlying network. Agents can use location-specific resources by communicating with dedicated agents representing these services. Agents are able to use services provided by other agents and to provide services as well.

Penguin!

· FTP site: www.perl.org/CPAN/modules/by-category/23_Miscellaneous_Modules/Penguin/FSG/

Penguin is a Perl 5 module. It provides you with a set of functions which allow you to:

· send encrypted, digitally signed Perl code to a remote machine to be executed;
· receive code and, depending on who signed it, execute it in an arbitrarily secure, limited compartment.

The combination of these functions enables direct Perl coding of algorithms to handle safe internet commerce, mobile information-gathering agents, "live content" web browser helper apps, distributed load-balanced computation, remote software update, distance machine administration, content-based information propagation, Internet-wide shared-data applications, network application builders, and so on.

RealTimeBattle

· Web site: www.lysator.liu.se/realtimebattle/

RealTimeBattle is a programming game in which robots controlled by programs fight each other. The goal is to destroy the enemies, using the radar to examine the environment and the cannon to shoot.

· The game progresses in real time, with the robot programs running as child processes of RealTimeBattle.
· The robots communicate with the main program using standard input and output.
· Robots can be constructed in almost any programming language. · A large number of robots can compete simultaneously. · A simple messaging language is used for communication, which makes it easy to start constructing robots. · Robots behave like real physical objects. · You can create your own arenas. · Highly configurable. Remembrance Agents · Web site: rhodes.www.media.mit.edu/people/rhodes/RA/ Remembrance Agents are a set of applications that watch over a user's shoulder and suggest information relevant to the current situation. While query-based memory aids help with direct recall, remembrance agents are an augmented associative memory. For example, the word-processor version of the RA continuously updates a list of documents relevant to what's being typed or read in an emacs buffer. These suggested documents can be any text files that might be relevant to what you are currently writing or reading. They might be old emails related to the mail you are currently reading, or abstracts from papers and newspaper articles that discuss the topic of your writing. SimRobot · Web site: www.informatik.uni-bremen.de/~simrobot/ · FTP site: ftp.uni-bremen.de/pub/ZKW/INFORM/simrobot/ SimRobot is a program for the simulation of sensor-based robots in a 3D environment. It is written in C++, runs under UNIX and X11, and needs the graphics toolkit XView. · Simulation of robot kinematics · Hierarchically built scene definition via a simple definition language · Various sensors built in: camera, facette eye, distance measurement, light sensor, etc. · Objects defined as polyhedra · Emitter abstractly defined; can be interpreted e.g. 
as light or sound · Camera images computed according to the raytracing or Z-buffer algorithms known from computer graphics · Specific sensor/motor software interface for communicating with the simulation · Texture mapping onto the object surfaces: bitmaps in various formats · Comprehensive visualization of the scene: wire frame w/o hidden lines, sensor and actor values · Interactive as well as batch-driven control of the agents and operation in the environment · Collision detection · Extensibility with user-defined object types · Possible socket communication to e.g. the Khoros image processing software Sulawesi · Web site: wearables.essex.ac.uk/sulawesi/ A framework called Sulawesi has been designed and implemented to tackle what are considered to be important challenges in a wearable user interface: the ability to accept input from any number of modalities and, if necessary, to perform a translation to any number of modal outputs. It does this primarily through a set of proactive agents that act on the input. TclRobots · FTP site: ftp.neosoft.com/pub/tcl/sorted/games/tclrobots-2.0/ · Redhat Patch: ftp.coe.uga.edu/users/jae/ai/tclrobots-redhat.patch · RPMs (search at): http://rufus.w3.org/ TclRobots is a programming game, similar to 'Core War'. To play TclRobots, you must write a Tcl program that controls a robot. The robot's mission is to survive a battle with other robots. Two, three, or four robots compete during a battle, each running different programs (or possibly the same program in different robots). Each robot is equipped with a scanner, a cannon, and a drive mechanism. A single match continues until one robot is left running. Robots may compete individually, or combine in a team-oriented battle. A tournament can be run with any number of robot programs, each robot playing every other in a round-robin fashion, one-on-one. A battle simulator is available to help debug robot programs. 
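The round-robin tournament just described is simple to reason about: each robot meets every other exactly once, one-on-one, so n robots yield n*(n-1)/2 battles. A minimal sketch of generating such a schedule, written in Python purely for illustration (TclRobots itself is driven by Tcl programs, and the robot names below are made up):

```python
from itertools import combinations

def round_robin(robots):
    """All one-on-one pairings: each robot plays every other exactly once."""
    return list(combinations(robots, 2))

# Hypothetical robot programs entered in a tournament.
matches = round_robin(["sniper.tcl", "rabbit.tcl", "tracker.tcl", "wall.tcl"])
# 4 robots -> 4*3/2 = 6 one-on-one battles
```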
The TclRobots program provides a physical environment, imposing certain game parameters to which all robots must adhere. TclRobots also provides a view on a battle, and a controlling user interface. TclRobots requires a wish interpreter built from Tcl 7.4 and Tk 4.0. TKQML · Web site: www.csee.umbc.edu/tkqml/ TKQML is a KQML application/addition to Tcl/Tk, which allows Tcl-based systems to communicate easily with a powerful agent communication language. The TACOMA Project · Web site: www.tacoma.cs.uit.no/ An agent is a process that may migrate through a computer network in order to satisfy requests made by clients. Agents are an attractive way to describe network-wide computations. The TACOMA project focuses on operating system support for agents and how agents can be used to solve problems traditionally addressed by operating systems. We have implemented a series of prototype systems to support agents. TACOMA Version 1.2 is based on UNIX and TCP. The system supports agents written in C, Tcl/Tk, Perl, Python, and Scheme (Elk). It is implemented in C. This TACOMA version has been in the public domain since April 1996. We are currently focusing on heterogeneity, fault-tolerance, security and management issues. Also, several TACOMA applications are under construction. We implemented StormCast 4.0, a wide-area network weather monitoring system accessible over the internet, using TACOMA and Java. We are now in the process of evaluating this application, and plan to build a new StormCast version to be completed by June 1997. UMPRS Agent · Web site: members.home.net/marcush/IRS/ UMPRS supports top-down, goal-based reasoning and selects goals and plans based on maximal priority. Execution of multiple simultaneous goals is supported, with suspension and resumption capabilities for each goal (i.e., intention) thread. UMPRS plans have an integrated precondition/runtime attribute that constrains their applicability. 
Available plan constructs include: sequencing, iteration, subgoaling, atomic (i.e., non-interruptible) blocks, n-branch deterministic conditional execution, an explicit failure-handling section, and C++ primitive function definition. Virtual Secretary Project (ViSe) (Tcl/Tk) · Web site: www.cs.uit.no/DOS/Virt_Sec The motivation of the Virtual Secretary project is to construct user-model-based intelligent software agents, which could in most cases replace humans for secretarial tasks, based on modern mobile computing and computer networks. The project includes two different phases: the first phase (ViSe1) focuses on information filtering and process migration; its goal is to create a secure environment for software agents using the concept of user models. The second phase (ViSe2) concentrates on agents' intelligent and efficient cooperation in a distributed environment; its goal is to construct cooperative agents for achieving high intelligence. (Implemented in Tcl/TclX/Tix/Tk) VWORLD · Web site: zhar.net/gnu-linux/projects/vworld/ Vworld is a simulated environment for research with autonomous agents written in Prolog. It is currently in something of a beta stage. It works well with SWI-Prolog, but should work with Quintus Prolog with only a few changes. It is being designed to serve as an educational tool for class projects dealing with Prolog and autonomous agents. It comes with three demo worlds or environments, along with sample agents for them. There are now two versions: one written for SWI-Prolog and one written for LPA Prolog. Documentation is roughly done (with a student/professor framework in mind), and a graphical interface is planned. WebMate · Web site: www.cs.cmu.edu/~softagents/webmate/ WebMate is a personal agent for World-Wide Web browsing and searching. It accompanies you when you travel on the internet and provides you with what you want. 
Features include: · Searching enhancement, including parallel search, search keyword refinement using our relevant-keyword extraction technology, relevance feedback, etc. · Browsing assistant, including learning your current interests, recommending new URLs according to your profile and selected resources, monitoring bookmarks of Netscape or IE, sending the current browsing page to your friends, etc. · Offline browsing, including downloading the pages linked from the current page for offline browsing. · Filtering HTTP headers, including recording HTTP headers and all the transactions between your browser and WWW servers, etc. · Checking HTML pages to find errors or dead links, etc. · Programmed in Java, independent of the operating system, running multi-threaded. Zeus · Web site: www.labs.bt.com/projects/agents/zeus/ The construction of multi-agent systems involves long development times and requires solutions to some considerable technical difficulties. This has motivated the development of the ZEUS toolkit, which provides a library of software components and tools that facilitate the rapid design, development and deployment of agent systems. 7. Programming languages While any programming language can be used for artificial intelligence/life research, these are programming languages which are used extensively for, if not specifically made for, artificial intelligence programming. Allegro CL · Web site: www.franz.com Franz Inc.'s free Linux version of their Lisp development environment. You can download it, or they will mail you a CD for free (you don't even have to pay for shipping). It is generally considered to be one of the better Lisp platforms. APRIL · Web site: sourceforge.net/project/?group_id=3173 APRIL is a symbolic programming language that is designed for writing mobile, distributed and agent-based systems, especially in an Internet environment. 
It has advanced features such as a macro sub-language, asynchronous message sending and receiving, code mobility, pattern matching, higher-order functions and strong typing. The language is compiled to byte-code which is then interpreted by the APRIL runtime-engine. APRIL now requires the InterAgent Communications Model (ICM) to be installed before it can be installed. [Ed. ICM can be found at the same web site] B-Prolog · Web site: www.sci.brooklyn.cuny.edu/~zhou/bprolog.html · Web site: www.cad.mse.kyutech.ac.jp/people/zhou/bprolog.html B-Prolog is a compact and complete CLP system that runs Prolog and CLP(FD) programs. An emulator-based system, B-Prolog has performance comparable with SICStus Prolog. · In addition to Edinburgh-style programs, B-Prolog accepts canonical-form programs that can be compiled into more compact and faster code than standard Prolog programs. · B-Prolog includes an interpreter and provides an interactive interface through which users can consult, list, compile, load, debug and run programs. The command editor facilitates the reuse of old commands. · B-Prolog provides a bi-directional interface with C and Java. The interface makes it possible for Prolog programs to use resources in C and Java, such as graphics and sockets, and for a Prolog program to be embedded in C and Java applications. · B-Prolog supports most of the built-ins in ISO Prolog. · B-Prolog supports the delaying (co-routining) mechanism, which can be used to implement concurrency, test-and-generate search algorithms, and most importantly constraint propagation algorithms. · B-Prolog has an efficient constraint compiler for constraints over finite domains and Booleans. · B-Prolog supports the tabling mechanism, which has proven effective for applications including parsing, problem solving, theorem proving, and deductive databases. 
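Tabling, mentioned in the last bullet above, records the result of each distinct call so it is only ever computed once, which is what makes naively exponential recurrences (and many parsing and deductive-database queries) tractable. The same idea transplanted to Python memoization, purely as an illustration of the mechanism (this is not B-Prolog syntax):

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)  # the "table": each distinct call is evaluated once
def ways(n):
    """Toy Fibonacci-shaped recurrence: exponential when re-derived
    naively, linear once results are tabled."""
    global call_count
    call_count += 1
    if n <= 1:
        return 1
    return ways(n - 1) + ways(n - 2)

result = ways(30)  # only 31 distinct calls (n = 30 down to 0) are evaluated
```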
DHARMI · Web site: http://megazone.bigpanda.com/~wolf/DHARMI/ DHARMI is a high-level spatial, tinker-toy-like language whose components are transparently administered by a background process called the Habitat. As the name suggests, the language was designed for modelling prototypes and handling living data. Programs can be modified while running. This is accomplished by blurring the distinction between source code, program, and data. ECoLisp · Web site (???): www.di.unipi.it/~attardi/software.html ECoLisp (Embeddable Common Lisp) is an implementation of Common Lisp designed to be embeddable in C-based applications. ECL uses standard C calling conventions for Lisp compiled functions, which allows C programs to easily call Lisp functions and vice versa. No foreign function interface is required: data can be exchanged between C and Lisp with no need for conversion. ECL is based on a Common Runtime Support (CRS) which provides basic facilities for memory management, dynamic loading and dumping of binary images, and support for multiple threads of execution. The CRS is built into a library that can be linked with the code of the application. ECL is modular: the main modules are the program development tools (top level, debugger, trace, stepper), the compiler, and CLOS. A native implementation of CLOS is available in ECL: one can configure ECL with or without CLOS. A runtime version of ECL can be built with just the modules which are required by the application. The ECL compiler compiles from Lisp to C, and then invokes the GCC compiler to produce binaries. ESTEREL · Web site: www-sop.inria.fr/meije/esterel/ Esterel is both a programming language dedicated to programming reactive systems, and a compiler which translates Esterel programs into finite-state machines. It is particularly well-suited to reactive systems, including real-time systems and control automata. Only the binary of the language compiler is available. 
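No Esterel source is shown here, but the finite-state machines its compiler produces are easy to picture: the whole program becomes a transition table, and each reaction consumes an input signal and moves to a new state. A minimal reactive machine sketched in Python (the states and signals are invented for the example; this is not Esterel syntax):

```python
# Illustrative only: the kind of finite-state machine a reactive program
# compiles down to. States and input signals are hypothetical.
TRANSITIONS = {
    ("idle",    "start"): "running",
    ("running", "pause"): "idle",
    ("running", "stop"):  "done",
}

def react(state, signal):
    """One reaction step: consume an input signal, move to the next state.
    Signals with no transition from the current state are ignored."""
    return TRANSITIONS.get((state, signal), state)

state = "idle"
for signal in ["start", "pause", "start", "stop"]:
    state = react(state, signal)
# idle -> running -> idle -> running -> done
```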
Gödel · Web page: www.cs.bris.ac.uk/~bowers/goedel.html Gödel is a declarative, general-purpose programming language in the family of logic programming languages. It is a strongly typed language, the type system being based on many-sorted logic with parametric polymorphism. It has a module system. Gödel supports infinite-precision integers, infinite-precision rationals, and also floating-point numbers. It can solve constraints over finite domains of integers and also linear rational constraints. It supports processing of finite sets. It also has a flexible computation rule and a pruning operator which generalizes the commit of the concurrent logic programming languages. Considerable emphasis is placed on Gödel's meta-logical facilities, which provide significant support for meta-programs that do analysis, transformation, compilation, verification, debugging, and so on. LIFE · Web page: www.isg.sfu.ca/life LIFE (Logic, Inheritance, Functions, and Equations) is an experimental programming language proposing to integrate three orthogonal programming paradigms proven useful for symbolic computation. From the programmer's standpoint, it may be perceived as a language taking after logic programming, functional programming, and object-oriented programming. From a formal perspective, it may be seen as an instance (or rather, a composition of three instances) of a Constraint Logic Programming scheme due to Hoehfeld and Smolka, refining that of Jaffar and Lassez. CLisp (Lisp) · Web page: clisp.sourceforge.net · FTP site: clisp.cons.org/pub/lisp/clisp/source CLISP is a Common Lisp implementation by Bruno Haible and Michael Stoll. It mostly supports the Lisp described by Common LISP: The Language (2nd edition) and the ANSI Common Lisp standard. CLISP includes an interpreter, a byte-compiler, a large subset of CLOS (Object-Oriented Lisp), a foreign language interface and, for some machines, a screen editor. 
The user interface language (English, German, French) is chosen at run time. Major packages that run in CLISP include CLX & Garnet. CLISP needs only 2 MB of memory. CMU Common Lisp · Web page: www.cons.org/cmucl/ · Old Web page: www.mv.com/users/pw/lisp/index.html · FTP site: ftp2.cons.org/pub/languages/lisp/cmucl/release/ · Linux Installation: www.telent.net/lisp/howto.html CMU Common Lisp is a public domain "industrial strength" Common Lisp programming environment. Many of the X3j13 changes have been incorporated into CMU CL. Wherever possible, this has been done so as to transparently allow the use of either CLtL1 or proposed ANSI CL. Probably the new features most interesting to users are SETF functions, LOOP and the WITH-COMPILATION-UNIT macro. GCL (Lisp) · FTP site: ftp.ma.utexas.edu/pub/gcl/ GNU Common Lisp (GCL) has a compiler and interpreter for Common Lisp. It used to be known as Kyoto Common Lisp. It is very portable and extremely efficient on a wide class of applications. It compares favorably in performance with commercial Lisps on several large theorem-prover and symbolic algebra systems. It supports the CLtL1 specification but is moving towards the proposed ANSI definition. GCL compiles to C and then uses the native optimizing C compilers (e.g., GCC). A function with a fixed number of args and one value turns into a C function of the same number of args, returning one value, so GCL is maximally efficient on such calls. It has a conservative garbage collector which allows great freedom for the C compiler to put Lisp values in arbitrary registers. It has a source level Lisp debugger for interpreted code, with display of source code in an Emacs window. Its profiling tools (based on the C profiling tools) count function calls and the time spent in each function. 
GNU Prolog · Web site: pauillac.inria.fr/~diaz/gnu-prolog/ · Web site: www.gnu.org/software/prolog/prolog.html GNU Prolog is a free Prolog compiler with constraint solving over finite domains, developed by Daniel Diaz. GNU Prolog accepts Prolog+constraint programs and produces native binaries (like gcc does from a C source). The resulting executable is stand-alone, and its size can be quite small since GNU Prolog can avoid linking the code of most unused built-in predicates. The performance of GNU Prolog is very encouraging (comparable to commercial systems). Besides native-code compilation, GNU Prolog offers a classical interactive interpreter (top-level) with a debugger. The Prolog part conforms to the ISO standard for Prolog, with many extensions that are very useful in practice (global variables, OS interface, sockets, ...). GNU Prolog also includes an efficient constraint solver over Finite Domains (FD). This opens constraint logic programming to the user, combining the power of constraint programming with the declarativity of logic programming. Mercury · Web page: www.cs.mu.oz.au/research/mercury/ Mercury is a new, purely declarative logic programming language. Like Prolog and other existing logic programming languages, it is a very high-level language that allows programmers to concentrate on the problem rather than low-level details such as memory management. Unlike Prolog, which is oriented towards exploratory programming, Mercury is designed for the construction of large, reliable, efficient software systems by teams of programmers. As a consequence, programming in Mercury has a different flavor than programming in Prolog. Mozart · Web page: www.mozart-oz.org/ The Mozart system provides state-of-the-art support in two areas: open distributed computing and constraint-based inference. Mozart implements Oz, a concurrent object-oriented language with dataflow synchronization. 
Oz combines concurrent and distributed programming with logical constraint-based inference, making it a unique choice for developing multi-agent systems. Mozart is an ideal platform both for general-purpose distributed applications and for hard problems requiring sophisticated optimization and inferencing abilities. We have developed applications in scheduling and time-tabling, in placement and configuration, in natural language and knowledge representation, multi-agent systems and sophisticated collaborative tools. SWI Prolog · Web page: www.swi.psy.uva.nl/projects/SWI-Prolog/ · FTP site: swi.psy.uva.nl/pub/SWI-Prolog/ SWI-Prolog is a free version of Prolog in the Edinburgh Prolog family (making it very similar to Quintus and many other versions), with a large library of built-in predicates, a module system, garbage collection, a two-way interface with the C language, and many other features. It is meant as an educational language, so its compiled code isn't the fastest, although its similarity to Quintus allows for easy porting. XPCE, an object-oriented X Windows GUI development package/environment, is freely available in binary form for the Linux version of SWI-Prolog. Kali Scheme · Web site: www.neci.nj.nec.com/PLS/Kali.html Kali Scheme is a distributed implementation of Scheme that permits efficient transmission of higher-order objects such as closures and continuations. The integration of distributed communication facilities within a higher-order programming language engenders a number of new abstractions and paradigms for distributed computing. Among these are user-specified load-balancing and migration policies for threads, incrementally-linked distributed computations, agents, and parameterized client-server applications. Kali Scheme supports concurrency and communication using first-class procedures and continuations. 
It integrates procedures and continuations into a message-based distributed framework that allows any Scheme object (including code vectors) to be sent and received in a message. RScheme · Web site: www.rscheme.org · FTP site: ftp.rscheme.org/pub/rscheme/ RScheme is an object-oriented, extended version of the Scheme dialect of Lisp. RScheme is freely redistributable, and offers reasonable performance despite being extraordinarily portable. RScheme can be compiled to C, and the C can then be compiled with a normal C compiler to generate machine code. By default, however, RScheme compiles to bytecodes which are interpreted by a (runtime) virtual machine. This ensures that compilation is fast and keeps code size down. In general, we recommend using the (default) bytecode code generation system, and only compiling your time-critical code to machine code. This allows a nice adjustment of space/time tradeoffs. (See the web site for details.) Scheme 48 · Web site: www.neci.nj.nec.com/homepages/kelsey/ Scheme 48 is a Scheme implementation based on a virtual machine architecture. Scheme 48 is designed to be straightforward, flexible, reliable, and fast. It should be easily portable to 32-bit byte-addressed machines that have POSIX and ANSI C support. In addition to the usual Scheme built-in procedures and a development environment, library software includes support for hygienic macros (as described in the Revised^4 Scheme report), multitasking, records, exception handling, hash tables, arrays, weak pointers, and FORMAT. Scheme 48 implements and exploits an experimental module system loosely derived from Standard ML and Scheme Xerox. The development environment supports interactive changes to modules and interfaces. SCM (Scheme) · Web site: www-swiss.ai.mit.edu/~jaffer/SCM.html · FTP site: swiss-ftp.ai.mit.edu:/archive/scm/ SCM conforms to the Revised^4 Report on the Algorithmic Language Scheme and the IEEE P1178 specification. SCM is written in C. 
It uses the following utilities (all available at the ftp site): · SLIB (Standard Scheme Library) is a portable Scheme library which is intended to provide compatibility and utility functions for all standard Scheme implementations, including SCM, Chez, Elk, Gambit, MacScheme, MITScheme, scheme->C, Scheme48, T3.1, and VSCM, and is available as the file slib2c0.tar.gz. Written by Aubrey Jaffer. · JACAL is a symbolic math system written in Scheme, and is available as the file jacal1a7.tar.gz. · Interfaces to standard libraries, including REGEX string regular expression matching and the CURSES screen management package. · Available add-on packages, including an interactive debugger, database, X-window graphics, BGI graphics, Motif, and OpenWindows packages. · A compiler (HOBBIT, available separately) and dynamic linking of compiled modules. Shift · Web site: www.path.berkeley.edu/shift/ Shift is a programming language for describing dynamic networks of hybrid automata. Such systems consist of components which can be created, interconnected and destroyed as the system evolves. Components exhibit hybrid behavior, consisting of continuous-time phases separated by discrete-event transitions. Components may evolve independently, or they may interact through their inputs, outputs and exported events. The interaction network itself may evolve. YAP Prolog · Web site: www.ncc.up.pt/~vsc/Yap/ YAP is a high-performance Prolog compiler developed at LIACC/Universidade do Porto. Its Prolog engine is based on the WAM (Warren Abstract Machine), with several optimizations for better performance. YAP follows the Edinburgh tradition, and is largely compatible with DEC-10 Prolog, Quintus Prolog, and especially with C-Prolog. Work on the more recent versions of YAP aims at several goals: · Portability: The whole system is now written in C. YAP compiles on popular 32-bit machines, such as Suns and Linux PCs, and on 64-bit machines, such as Alphas running OSF Unix and Linux. 
· Performance: We have optimised the emulator to obtain performance comparable to or better than that of well-known Prolog systems. In fact, the current version of YAP performs better than the original one, written in assembly language. · Robustness: We have tested the system with a large array of Prolog applications. · Extensibility: YAP was designed internally from the beginning to encapsulate manipulation of terms. These principles were used, for example, to implement a simple and powerful C interface. The new version of YAP extends these principles to accommodate extensions to the unification algorithm, which we believe will be useful to implement extensions such as constraint programming. · Completeness: YAP has for a long time provided most built-ins expected from an Edinburgh Prolog implementation. These include I/O functionality, database operations, and modules. Work on YAP now aims at compatibility with the Prolog standard. · Openness: We would like to make new development of YAP open to the user community. · Research: YAP has been a vehicle for research within and outside our group. Research is currently going on in parallelisation and tabulation, and we have started work to support constraint handling. Linux AX25-HOWTO, Amateur Radio. Terry Dawson, VK2KTJ, terry@perf.no.itg.telstra.com.au v1.5, 17 October 1997 The Linux Operating System is perhaps the only operating system in the world that can boast native and standard support for the AX.25 packet radio protocol utilised by Amateur Radio Operators worldwide. This document aims to describe how to install and configure this support. ______________________________________________________________________ Table of Contents 1. Introduction. 1.1 Changes from the previous version 1.2 Where to obtain new versions of this document. 1.3 Other related documentation. 2. The Packet Radio Protocols and Linux. 2.1 How it all fits together. 3. The AX.25/NetRom/Rose software components. 
3.1 Finding the kernel, tools and utility packages. 3.1.1 The kernel source: 3.1.2 The network tools: 3.1.3 The AX25 utilities: 4. Installing the AX.25/NetRom/Rose software. 4.1 Compiling the kernel. 4.1.1 A word about Kernel modules 4.1.2 What's new in 2.0.*+ModuleXX or 2.1.* Kernels ? 4.2 The network configuration tools. 4.2.1 A patch kit that adds Rose support and fixes some bugs. 4.2.2 Building the standard net-tools release. 4.3 The AX.25 user and utility programs. 5. A note on callsigns, addresses and things before we start. 5.1 What are all those T1, T2, N2 and things ? 5.2 Run time configurable parameters 6. Configuring an AX.25 port. 6.1 Creating the AX.25 network device. 6.1.1 Creating a KISS device. 6.1.1.1 Configuring for Dual Port TNC's 6.1.2 Creating a Baycom device. 6.1.3 Configuring the AX.25 channel access parameters. 6.1.3.1 Configuring the Kernel AX.25 to use the BayCom device 6.1.4 Creating a SoundModem device. 6.1.4.1 Configuring the sound card. 6.1.4.2 Configuring the SoundModem driver. 6.1.4.3 Configuring the AX.25 channel access parameters. 6.1.4.4 Setting the audio levels and tuning the driver. 6.1.4.5 Configuring the Kernel AX.25 to use the SoundModem 6.1.5 Creating a PI card device. 6.1.6 Creating a PacketTwin device. 6.1.7 Creating a generic SCC device. 6.1.7.1 Obtaining and building the configuration tool package. 6.1.7.2 Configuring the driver for your card. 6.1.7.2.1 Configuration of the hardware parameters. 6.1.7.3 Channel Configuration 6.1.7.4 Using the driver. 6.1.7.5 The 6.1.8 Creating a BPQ ethernet device. 6.1.9 Configuring the BPQ Node to talk to the Linux AX.25 support. 6.2 Creating the 6.3 Configuring AX.25 routing. 7. Configuring an AX.25 interface for TCP/IP. 8. Configuring a NetRom port. 8.1 Configuring 8.2 Configuring 8.3 Creating the NetRom Network device 8.4 Starting the NetRom daemon 8.5 Configuring NetRom routing. 9. Configuring a NetRom interface for TCP/IP. 10. Configuring a Rose port. 
10.1 Configuring 10.2 Creating the Rose Network device. 10.3 Configuring Rose Routing 11. Making AX.25/NetRom/Rose calls. 12. Configuring Linux to accept Packet connections. 12.1 Creating the 12.2 A simple example 12.3 Starting 13. Configuring the 13.1 Creating the 13.2 Creating the 13.3 Configuring 13.4 Configuring 14. Configuring 14.1 Creating the 15. Configuring the 15.1 Create the 15.2 Create the 15.3 Associate AX.25 callsigns with system users. 15.4 Add the PMS to the 15.5 Test the PMS. 16. Configuring the 17. Configuring the Rose Uplink and Downlink commands 17.1 Configuring a Rose downlink 17.2 Configuring a Rose uplink 18. Associating AX.25 callsigns with Linux users. 19. The 20. AX.25, NetRom, Rose network programming. 20.1 The address families. 20.2 The header files. 20.3 Callsign mangling and examples. 21. Some sample configurations. 21.1 Small Ethernet LAN with Linux as a router to Radio LAN 21.2 IPIP encapsulated gateway configuration. 21.3 AXIP encapsulated gateway configuration 21.3.1 AXIP configuration options. 21.3.2 A typical 21.3.3 Running 21.3.4 Some notes about the routes and route flags 21.4 Linking NOS and Linux using a pipe device 22. Where do I find more information about .... ? 22.1 Packet Radio 22.2 Protocol Documentation 22.3 Hardware Documentation 23. Discussion relating to Amateur Radio and Linux. 24. Acknowledgements. 25. Copyright. ______________________________________________________________________ 1. Introduction. This document was originally an appendix to the HAM-HOWTO, but grew too large to be reasonably managed in that fashion. This document describes how to install and configure the native AX.25, NetRom and Rose support for Linux. A few typical configurations are described that could be used as models to work from. The Linux implementation of the amateur radio protocols is very flexible. To people relatively unfamiliar with the Linux operating system the configuration process may look daunting and complicated. 
It will take you a little time to come to understand how the whole thing fits together. You will find configuration very difficult if you have not properly prepared yourself by learning about Linux in general. You cannot expect to switch from some other environment to Linux without learning about Linux itself. 1.1. Changes from the previous version Additions: Joerg Reuter's Web Page "More Information" section ax25ipd configuration. Corrections/Updates: Changed ptys to a safer range to prevent possible conflicts Updated module and ax25-utils versions. ToDo: Fix up the SCC section, this is probably wrong. Expand on the programming section. 1.2. Where to obtain new versions of this document. The best place to obtain the latest version of this document is from a Linux Documentation Project archive. The Linux Documentation Project runs a web server, and this document appears there as the AX25-HOWTO. This document is also available in various formats from the sunsite.unc.edu ftp archive. You can always contact me, but I pass new versions of the document directly to the LDP HOWTO coordinator, so if it isn't there then chances are I haven't finished it. 1.3. Other related documentation. There is a lot of related documentation. Many documents relate to Linux networking in more general ways, and I strongly recommend you also read these, as they will assist you in your efforts and provide you with stronger insight into other possible configurations. They are: the HAM-HOWTO, the NET-3-HOWTO, the Ethernet-HOWTO, and the Firewall-HOWTO. More general Linux information may be found by reference to other Linux HOWTO documents. 2. The Packet Radio Protocols and Linux. The AX.25 protocol offers both connected and connectionless modes of operation, and is used either by itself for point-to-point links, or to carry other protocols such as TCP/IP and NetRom. 
It is similar to X.25 level 2 in structure, with some extensions to make it more useful in the amateur radio environment.

The NetRom protocol is an attempt at a full network protocol and uses AX.25 at its lowest layer as a datalink protocol. It provides a network layer that is an adapted form of AX.25. The NetRom protocol features dynamic routing and node aliases.

The Rose protocol, conceived and first implemented by Tom Moulton W2VY, is an implementation of the X.25 packet layer protocol and is designed to operate with AX.25 as its datalink layer protocol. It too provides a network layer. Rose addresses take the form of 10 digit numbers. The first four digits are called the Data Network Identification Code (DNIC) and are taken from Appendix B of the CCITT X.121 recommendation. More information on the Rose protocol may be obtained from the RATS Web server.

Alan Cox developed some early kernel based AX.25 software support for Linux. Jonathon Naylor has taken up ongoing development of the code, has added NetRom and Rose support, and is now the developer of the AX.25 related kernel code. DAMA support was developed by Joerg Reuter, DL1BKE, jreuter@poboxes.com. Baycom and SoundModem support were added by Thomas Sailer. The AX.25 utility software is now maintained by me.

The Linux code supports KISS based TNC's (Terminal Node Controllers), the Ottawa PI card, the Gracilis PacketTwin card and other Z8530 SCC based cards with the generic SCC driver, and both the Parallel and Serial port Baycom modems. Thomas's new soundmodem driver supports Soundblaster and soundcards based on the Crystal chipset.

The user programs include a simple PMS (Personal Message System), a beacon facility, a line mode connect program, `listen', an example of how to capture all AX.25 frames at raw interface level, and programs to configure the NetRom protocol.
Included also are an AX.25 server style program to handle and despatch incoming AX.25 connections, and a NetRom daemon which does most of the hard work for NetRom support.

2.1. How it all fits together.

The Linux AX.25 implementation is a brand new implementation. While in many ways it may look similar to NOS, or BPQ, or other AX.25 implementations, it is none of these and is not identical to any of them. The Linux AX.25 implementation is capable of being configured to behave almost identically to other implementations, but the configuration process is very different.

To assist you in understanding how you need to think when configuring, this section describes some of the structural features of the AX.25 implementation and how it fits into the context of the overall Linux structure.

     Simplified Protocol Layering Diagram

     -----------------------------------------------
     | AF_AX25 | AF_NETROM |   AF_INET   | AF_ROSE |
     |=========|===========|=============|=========|
     |         |           |             |         |
     |         |           |   TCP/IP    |         |
     |         |           |-------------|         |
     |         |  NetRom   |             |  Rose   |
     |         |-----------------------------------|
     |                   AX.25                     |
     -----------------------------------------------

This diagram simply illustrates that NetRom, Rose and TCP/IP all run directly on top of AX.25, but that each of these protocols is treated as a separate protocol at the programming interface. The `AF_' names are simply the names given to the `Address Family' of each of these protocols when writing programs to use them. The important thing to note here is the implicit dependence on the configuration of your AX.25 devices before you can configure your NetRom, Rose or TCP/IP devices.
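The dependence just described implies a strict bottom-up order when you configure: the AX.25 device must exist before anything can run over it. A minimal bring-up sequence might look like the sketch below. This is an illustration only: it assumes a KISS TNC on /dev/ttyS0 with an axports entry named `radio' (both used as examples later in this document), and the IP address is a placeholder, not a value from this document.

```shell
#!/bin/sh
# Sketch: bottom-up configuration order for an AX.25 port.
# Assumptions: KISS TNC on /dev/ttyS0, /etc/ax25/axports entry "radio",
# placeholder amateur IP address.

# 1. Data link layer: create the AX.25 network device (ax0).
/usr/sbin/kissattach /dev/ttyS0 radio

# 2. Network layer: only once ax0 exists can TCP/IP be configured over it.
/sbin/ifconfig ax0 44.136.8.5 netmask 255.255.255.0 up
```

The same ordering applies to NetRom and Rose: their devices are created over an already configured AX.25 port, never the other way around.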
     Software module diagram of Linux Network Implementation

 ----------------------------------------------------------------------------
 User     | Programs | call  node        ||  Daemons | ax25d  mheardd
          |          | pms   mheard      ||          | inetd  netromd
 ----------------------------------------------------------------------------
          | Sockets  | open(), close(), listen(), read(), write(), connect()
          |          |------------------------------------------------------
          |          |  AF_AX25  |  AF_NETROM  |   AF_ROSE   |  AF_INET
          |------------------------------------------------------------------
 Kernel   | Protocols|   AX.25   |   NetRom    |    Rose     |  IP/TCP/UDP
          |------------------------------------------------------------------
          | Devices  |  ax0,ax1  |   nr0,nr1   | rose0,rose1 |  eth0,ppp0
          |------------------------------------------------------------------
          | Drivers  | Kiss  PI2  PacketTwin  SCC  BPQ       |  slip ppp
          |          |       Soundmodem  Baycom              |  ethernet
 ----------------------------------------------------------------------------
 Hardware | PI2 Card, PacketTwin Card, SCC card, Serial port, Ethernet Card
 ----------------------------------------------------------------------------

This diagram is a little more general than the first. It attempts to show the relationship between user applications, the kernel and the hardware. It also shows the relationship between the Socket application programming interface, the actual protocol modules, the kernel networking devices and the device drivers. Anything in this diagram is dependent on anything underneath it, and in general you must configure from the bottom of the diagram upwards. So, for example, if you want to run the call program you must also configure the hardware, ensure that the kernel has the appropriate device driver, create the appropriate network device, and ensure that the kernel includes the desired protocol, which presents a programming interface that the call program can use. I have attempted to lay out this document in roughly that order.

3. The AX.25/NetRom/Rose software components.
The AX.25 software is comprised of three components: the kernel source, the network configuration tools and the utility programs.

The version 2.0.xx Linux kernels include the AX.25, NetRom, Z8530 SCC, PI card and PacketTwin drivers by default. These have been significantly enhanced in the 2.1.* kernels. Unfortunately, the state of the rest of the 2.1.* kernel code makes those kernels fairly unstable at the moment and not a good choice for a production system. To solve this problem Jonathon Naylor has prepared a patch kit which will bring the amateur radio protocol support in a 2.0.28 kernel up to the standard of the 2.1.* kernels. The patch is very simple to apply, and provides a range of facilities not present in the standard kernel, such as Rose support.

3.1. Finding the kernel, tools and utility packages.

3.1.1. The kernel source:

The kernel source can be found in its usual place at:

     ftp.kernel.org
     /pub/linux/kernel/v2.0/linux-2.0.31.tar.gz

The current version of the AX25 upgrade patch is available at:

     ftp.pspt.fi
     /pub/linux/ham/ax25/ax25-module-14e.tar.gz

3.1.2. The network tools:

The latest alpha release of the standard Linux network tools supports AX.25 and NetRom and can be found at:

     ftp.inka.de
     /pub/comp/Linux/networking/net-tools/net-tools-1.33.tar.gz

The latest ipfwadm package can be found at:

     ftp.xos.nl
     /pub/linux/ipfwadm/

3.1.3. The AX25 utilities:

There are two different families of AX25 utilities. One is for the 2.0.* kernels and the other will work with either the 2.1.* kernels or the 2.0.*+moduleXX kernels. The ax25-utils version number indicates the oldest version of kernel that it will work with. Please choose a version of the ax25-utils appropriate to your kernel. The following are working combinations. You must use one of these combinations; any other combination will not work, or will not work well.
     Linux Kernel             AX25 Utility set
     ----------------------   -------------------------
     linux-2.0.29             ax25-utils-2.0.12c.tar.gz  **
     linux-2.0.28+module12    ax25-utils-2.1.22b.tar.gz  **
     linux-2.0.30+module14c   ax25-utils-2.1.42a.tar.gz
     linux-2.0.31+module14d   ax25-utils-2.1.42a.tar.gz
     linux-2.1.22 ++          ax25-utils-2.1.22b.tar.gz
     linux-2.1.42 ++          ax25-utils-2.1.42a.tar.gz

Note: the ax25-utils-2.0.* series (marked above with the '**' symbol) is now obsolete and is no longer supported. This document covers configuration using the versions of software recommended above. While there are differences between the releases, most of the information will be relevant to earlier releases of code.

The AX.25 utility programs can be found at:

     ftp.pspt.fi

or at:

     sunsite.unc.edu

4. Installing the AX.25/NetRom/Rose software.

To successfully install AX.25 support on your linux system you must configure and install an appropriate kernel and then install the AX.25 utilities.

4.1. Compiling the kernel.

If you are already familiar with the process of compiling the Linux Kernel then you can skip this section; just be sure to select the appropriate options when compiling the kernel. If you are not, then read on.

The normal place for the kernel source to be unpacked to is the /usr/src directory, into a subdirectory called linux. To do this you should be logged in as root and execute a series of commands similar to the following:

     # cd /usr/src
     # mv linux linux.old
     # tar xvfz linux-2.0.31.tar.gz
     # tar xvfz /pub/net/ax25/ax25-module-14e.tar.gz
     # patch -p0 ...

When you configure the kernel, you should select at least the following options:

     Code maturity level options  --->
         [*] Prompt for development and/or incomplete code/drivers
         ...
     General setup  --->
         ...
         [*] Networking support
         ...
     Networking options  --->
         ...
         [*] TCP/IP networking
         [?] IP: forwarding/gatewaying
         ...
         [?] IP: tunneling
         ...
         [?] IP: Allow large windows (not recommended if <16Mb of memory)
         ...
         [*] Amateur Radio AX.25 Level 2
         [?] Amateur Radio NET/ROM
         [?] Amateur Radio X.25 PLP (Rose)
         ...
     Network device support  --->
         [*] Network device support
         ...
         [*] Radio network interfaces
         [?] BAYCOM ser12 and par96 driver for AX.25
         [?] Soundcard modem driver for AX.25
         [?] Soundmodem support for Soundblaster and compatible cards
         [?] Soundmodem support for WSS and Crystal cards
         [?] Soundmodem support for 1200 baud AFSK modulation
         [?] Soundmodem support for 4800 baud HAPN-1 modulation
         [?] Soundmodem support for 9600 baud FSK G3RUH modulation
         [?] Serial port KISS driver for AX.25
         [?] BPQ Ethernet driver for AX.25
         [?] Gracilis PackeTwin support for AX.25
         [?] Ottawa PI and PI/2 support for AX.25
         [?] Z8530 SCC KISS emulation driver for AX.25
         ...

The options I have flagged with a `*' are those that you must answer `Y' to. The rest are dependent on what hardware you have and what other options you want to include. Some of these options are described in more detail later on, so if you don't know what you want yet, then read ahead and come back to this step later.

After you have completed the kernel configuration you should be able to cleanly compile your new kernel:

     # make dep
     # make clean
     # make zImage

Make sure you move your arch/i386/boot/zImage file to wherever you want it, then edit your /etc/lilo.conf file and rerun lilo to ensure that you actually boot from it.

4.1.1. A word about Kernel modules

I recommend that you don't compile any of the drivers as modules. In nearly all installations you gain nothing but additional complexity. Many people have problems trying to get the modularised components working, not because the software is faulty but because modules are more complicated to install and configure.

If you've chosen to compile any of the components as modules, then you'll also need to use:

     # make modules
     # make modules_install

to install your modules in the appropriate location. You will also need to add some entries into your /etc/conf.modules file that will ensure that the kerneld program knows how to handle the kernel modules.
You should add/modify the following:

     alias net-pf-3 ax25
     alias net-pf-6 netrom
     alias net-pf-11 rose
     alias tty-ldisc-1 slip
     alias tty-ldisc-3 ppp
     alias tty-ldisc-5 mkiss
     alias bc0 baycom
     alias nr0 netrom
     alias pi0a pi2
     alias pt0a pt
     alias scc0 optoscc    (or one of the other scc drivers)
     alias sm0 soundmodem
     alias tunl0 newtunnel
     alias char-major-4 serial
     alias char-major-5 serial
     alias char-major-6 lp

4.1.2. What's new in 2.0.*+ModuleXX or 2.1.* Kernels ?

The 2.1.* kernels have enhanced versions of nearly all of the protocols and drivers. The most significant of the enhancements are:

modularised
     the protocols and drivers have all been modularised so that you
     can insmod and rmmod them whenever you wish. This reduces the
     kernel memory requirements for infrequently used modules and
     makes development and bug hunting much simpler. That being said,
     it also makes configuration slightly more difficult.

All drivers are now network drivers
     all of the network devices such as Baycom, SCC, PI, Packettwin
     etc. now present a normal network interface; that is, they now
     look like the ethernet driver does, and no longer look like KISS
     TNC's. A new utility called net2kiss allows you to build a kiss
     interface to these devices if you wish.

Bug fixes
     there have been many bug fixes and new features added to the
     drivers and protocols. The Rose protocol is one important
     addition.

4.2. The network configuration tools.

Now that you have compiled the kernel you should compile the new network configuration tools. These tools allow you to modify the configuration of network devices and to add routes to the routing table. The new alpha release of the standard net-tools package includes support for AX.25 and NetRom. I've tested this and it seems to work well for me.

4.2.1. A patch kit that adds Rose support and fixes some bugs.

The standard net-tools-1.33.tar.gz package has some small bugs that affect the AX.25 and NetRom support.
I've made a small patch kit that corrects these and adds Rose support to the tools as well. You can get the patch from: zone.pspt.fi.

4.2.2. Building the standard net-tools release.

Don't forget to read the Release file and follow any instructions there. The steps I used to compile the tools were:

     # cd /usr/src
     # tar xvfz net-tools-1.33.tar.gz
     # zcat net-tools-1.33.rose.tjd.diff.gz | patch -p0
     # cd net-tools-1.33
     # make config

At this stage you will be presented with a series of configuration questions, similar to the kernel configuration questions. Be sure to include support for all of the protocols and network device types that you intend to use. If you do not know how to answer a particular question then answer `Y'.

When the compilation is complete, you should use the:

     # make install

command to install the programs in their proper place.

If you wish to use the IP firewall facilities then you will need the latest firewall administration tool, ipfwadm. This tool replaces the older ipfw tool, which will not work with new kernels. I compiled the ipfwadm utility with the following commands:

     # cd /usr/src
     # tar xvfz ipfwadm-2.0beta2.tar.gz
     # cd ipfwadm-2.0beta2
     # make install
     # cp ipfwadm.8 /usr/man/man8
     # cp ipfw.4 /usr/man/man4

4.3. The AX.25 user and utility programs.

After you have successfully compiled and booted your new kernel, you need to compile the user programs. To compile and install the user programs you should use a series of commands similar to the following:

     # cd /usr/src
     # tar xvfz ax25-utils-2.1.42a.tar.gz
     # cd ax25-utils-2.1.42a
     # make config
     # make
     # make install

The files will be installed under the /usr directory by default in subdirectories: bin, sbin, etc and man. If this is a first time installation, that is, you've never installed any ax25 utilities on your machine before, you should also use the:

     # make installconf

command to install some sample configuration files into the /etc/ax25/ directory from which to work.
If you get messages something like:

     gcc -Wall -Wstrict-prototypes -O2 -I../lib   -c call.c
     call.c: In function `statline':
     call.c:268: warning: implicit declaration of function `attron'
     call.c:268: `A_REVERSE' undeclared (first use this function)
     call.c:268: (Each undeclared identifier is reported only once
     call.c:268: for each function it appears in.)

then you should double check that you have the ncurses package properly installed on your system. The configuration script attempts to locate your ncurses packages in the common locations, but some installations have ncurses badly installed and it is unable to locate them.

5. A note on callsigns, addresses and things before we start.

Each AX.25 and NetRom port on your system must have a callsign/ssid allocated to it. These are configured in the configuration files that will be described in detail later on. Some AX.25 implementations such as NOS and BPQ will allow you to configure the same callsign/ssid on each AX.25 and NetRom port. For somewhat complicated technical reasons Linux does not allow this. This isn't as big a problem in practice as it might seem. This means that there are things you should be aware of and take into consideration when doing your configurations.

1. Each AX.25 and NetRom port must be configured with a unique
   callsign/ssid.

2. TCP/IP will use the callsign/ssid of the AX.25 port it is being
   transmitted or received by, ie the one you configured for the
   AX.25 interface in point 1.

3. NetRom will use the callsign/ssid specified for it in its
   configuration file, but this callsign is only used when your
   NetRom is speaking to another NetRom; it is not the callsign/ssid
   that AX.25 users who wish to use your NetRom `node' will use.
   More on this later.

4. Rose will, by default, use the callsign/ssid of the AX.25 port,
   unless the Rose callsign has been specifically set using the
   `rsparms' command. If you set a callsign/ssid using the `rsparms'
   command then Rose will use this callsign/ssid on all ports.
5. Other programs, such as the `ax25d' program, can listen using any
   callsign/ssid that they wish, and these may be duplicated across
   different ports.

6. If you are careful with routing you can configure the same IP
   address on all ports if you wish.

5.1. What are all those T1, T2, N2 and things ?

Not every AX.25 implementation is a TNC2. Linux uses nomenclature that differs in some respects from that you will be used to if your sole experience with packet is a TNC. The following table should help you interpret what each of the configurable items are, so that when you come across them later in this text you'll understand what they mean.

     ------------------------------------------------------------------
     Linux  | TAPR TNC | Description
     ------------------------------------------------------------------
     T1     | FRACK    | How long to wait before retransmitting an
            |          | unacknowledged frame.
     ------------------------------------------------------------------
     T2     | RESPTIME | The minimum amount of time to wait for
            |          | another frame to be received before
            |          | transmitting an acknowledgement.
     ------------------------------------------------------------------
     T3     | CHECK    | The period of time we wait between sending
            |          | a check that the link is still active.
     ------------------------------------------------------------------
     N2     | RETRY    | How many times to retransmit a frame before
            |          | assuming the connection has failed.
     ------------------------------------------------------------------
     Idle   |          | The period of time a connection can be idle
            |          | before we close it down.
     ------------------------------------------------------------------
     Window | MAXFRAME | The maximum number of unacknowledged
            |          | transmitted frames.
     ------------------------------------------------------------------

5.2. Run time configurable parameters

The 2.1.* and 2.0.*+moduleXX kernels have a new feature that allows you to change many previously unchangeable parameters at run time.
If you take a careful look at the /proc/sys/net/ directory structure you will see many files with useful names that describe various parameters for the network configuration. Each file in the /proc/sys/net/ax25/ directory represents one configured AX.25 port; the name of the file relates to the name of the port.

The structure of the files in /proc/sys/net/ax25/<portname>/ is as follows:

     FileName               Meaning              Values                 Default
     ip_default_mode        IP Default Mode      0=DG 1=VC              0
     ax25_default_mode      AX.25 Default Mode   0=Normal 1=Extended    0
     backoff_type           Backoff              0=Linear 1=Exponential 1
     connect_mode           Connected Mode       0=No 1=Yes             1
     standard_window_size   Standard Window      1 <= N <= 7            2
     extended_window_size   Extended Window      1 <= N <= 63           32
     t1_timeout             T1 Timeout           1s <= N <= 30s         10s
     t2_timeout             T2 Timeout           1s <= N <= 20s         3s
     t3_timeout             T3 Timeout           0s <= N <= 3600s       300s
     idle_timeout           Idle Timeout         0m <= N                20m
     maximum_retry_count    N2                   1 <= N <= 31           10
     maximum_packet_length  AX.25 Frame Length   1 <= N <= 512          256

In the table T1, T2 and T3 are given in seconds, and the Idle Timeout is given in minutes. But please note that the values used in the sysctl interface are given in internal units where the time in seconds is multiplied by 10; this allows resolution down to 1/10 of a second. With timers that are allowed to be zero, e.g. T3 and Idle, a zero value indicates that the timer is disabled.
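Because the sysctl files expect those internal units of tenths of a second, a small helper avoids arithmetic mistakes when setting timers by hand. This is only a sketch, not part of the ax25-utils; the port name `radio' in the commented example is a placeholder for whatever your port is called.

```shell
#!/bin/sh
# Convert a timer value in seconds to the internal sysctl units
# (tenths of a second) described above.
to_sysctl_units() {
    echo $(( $1 * 10 ))
}

# Example (hypothetical port name "radio"): set T1 to 15 seconds.
# echo "$(to_sysctl_units 15)" > /proc/sys/net/ax25/radio/t1_timeout

to_sysctl_units 15    # prints 150
```

The same conversion applies to T2 and T3; the idle_timeout file, being in minutes, follows the same scheme in its own units.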
The structure of the files in /proc/sys/net/netrom/ is as follows:

     FileName                          Values   Default
     default_path_quality                       10
     link_fails_count                           2
     network_ttl_initialiser                    16
     obsolescence_count_initialiser             6
     routing_control                            1
     transport_acknowledge_delay                50
     transport_busy_delay                       1800
     transport_maximum_tries                    3
     transport_requested_window_size            4
     transport_timeout                          1200

The structure of the files in /proc/sys/net/rose/ is as follows:

     FileName                          Values   Default
     acknowledge_hold_back_timeout              50
     call_request_timeout                       2000
     clear_request_timeout                      1800
     link_fail_timeout                          1200
     maximum_virtual_circuits                   50
     reset_request_timeout                      1800
     restart_request_timeout                    1800
     routing_control                            1
     window_size                                3

To set a parameter all you need to do is write the desired value to the file itself. For example, to check and set the Rose window size you'd use something like:

     # cat /proc/sys/net/rose/window_size
     3
     # echo 4 >/proc/sys/net/rose/window_size
     # cat /proc/sys/net/rose/window_size
     4

6. Configuring an AX.25 port.

Each of the AX.25 applications reads a particular configuration file to obtain the parameters for the various AX.25 ports configured on your Linux machine. For AX.25 ports the file that is read is the /etc/ax25/axports file. You must have an entry in this file for each AX.25 port you want on your system.

6.1. Creating the AX.25 network device.

The network device is what is listed when you use the `ifconfig' command. This is the object that the Linux kernel sends and receives network data from. Nearly always the network device has a physical port associated with it, but there are occasions where this isn't necessary. The network device does relate directly to a device driver. In the Linux AX.25 code there are a number of device drivers. The most common is probably the KISS driver, but others are the SCC driver(s), the Baycom driver and the SoundModem driver. Each of these device drivers will create a network device when it is started.

6.1.1. Creating a KISS device.
Kernel Compile Options:

     General setup  --->
         [*] Networking support
     Network device support  --->
         [*] Network device support
         ...
         [*] Radio network interfaces
         [*] Serial port KISS driver for AX.25

Probably the most common configuration will be for a KISS TNC on a serial port. You will need to have the TNC preconfigured and connected to your serial port. You can use a communications program like minicom or seyon to configure the TNC into kiss mode.

To create a KISS device you use the kissattach program. In its simplest form you can use the kissattach program as follows:

     # /usr/sbin/kissattach /dev/ttyS0 radio
     # kissparms -p radio -t 100 -s 100 -r 25

The kissattach command will create a KISS network device. These devices are called `ax[0-9]'. The first time you use the kissattach command it creates `ax0', the second time it creates `ax1', etc. Each KISS device has an associated serial port. The kissparms command allows you to set various KISS parameters on a KISS device.

Specifically, the example presented would create a KISS network device using the serial device `/dev/ttyS0' and the entry from the /etc/ax25/axports file with a port name of `radio'. It then configures it with a txdelay and slottime of 100 milliseconds and a ppersist value of 25. Please refer to the man pages for more information.

6.1.1.1. Configuring for Dual Port TNC's

The mkiss utility included in the ax25-utils distribution allows you to make use of both modems on a dual port TNC. Configuration is fairly simple. It works by taking a single serial device connected to a single multiport TNC and making it look like a number of devices, each connected to a single port TNC. You do this before you do any of the AX.25 configuration. The devices that you then do the AX.25 configuration on are pseudo-TTY interfaces (/dev/ttyq*), and not the actual serial device. Pseudo-TTY devices create a kind of pipe through which programs designed to talk to tty devices can talk to other programs designed to talk to tty devices.
Each pipe has a master and a slave end. The master end is generally called `/dev/ptyq*' and the slave ends are called `/dev/ttyq*'. There is a one to one relationship between masters and slaves, so /dev/ptyq0 is the master end of a pipe with /dev/ttyq0 as its slave. You must open the master end of a pipe before opening the slave end. mkiss exploits this mechanism to split a single serial device into separate devices.

Example: if you have a dual port tnc and it is connected to your /dev/ttyS0 serial device at 9600 bps, the commands:

     # /usr/sbin/mkiss -s 9600 /dev/ttyS0 /dev/ptyq0 /dev/ptyq1
     # /usr/sbin/kissattach /dev/ttyq0 port1
     # /usr/sbin/kissattach /dev/ttyq1 port2

would create two pseudo-tty devices that each look like a normal single port TNC. You would then treat /dev/ttyq0 and /dev/ttyq1 just as you would a conventional serial device with a TNC connected. This means you'd then use the kissattach command, as described above, on each of those; in the example, for AX.25 ports called port1 and port2. You shouldn't use kissattach on the actual serial device, as the mkiss program uses it.

The mkiss command has a number of optional arguments that you may wish to use. They are summarised as follows:

     -c   enables the addition of a one byte checksum to each KISS
          frame. This is not supported by most KISS implementations,
          but it is supported by the G8BPG KISS rom.

     -s   sets the speed of the serial port.

     -h   enables hardware handshaking on the serial port; it is off
          by default. Most KISS implementations do not support this,
          but some do.

     -l   enables logging of information to the syslog logfile.

6.1.2. Creating a Baycom device.

Kernel Compile Options:

     Code maturity level options  --->
         [*] Prompt for development and/or incomplete code/drivers
     General setup  --->
         [*] Networking support
     Network device support  --->
         [*] Network device support
         ...
         [*] Radio network interfaces
         [*] BAYCOM ser12 and par96 driver for AX.25

Thomas Sailer, despite the popularly held belief that it would not work very well, has developed Linux support for Baycom modems. His driver supports the Ser12 serial port, Par96 and the enhanced PicPar parallel port modems. Further information about the modems themselves may be obtained from the Baycom Web site.

Your first step should be to determine the i/o addresses and interrupts of the serial or parallel port(s) you have Baycom modem(s) connected to. When you have these you must configure the Baycom driver with them.

The BayCom driver creates network devices called bc0, bc1, bc2, etc. when it is configured. The sethdlc utility allows you to configure the driver with these parameters, or, if you have only one Baycom modem installed, you may specify the parameters on the insmod command line when you load the Baycom module.

For example, a simple configuration: disable the serial driver for COM1:, then configure the Baycom driver for a Ser12 serial port modem on COM1: with the software DCD option enabled:

     # setserial /dev/ttyS0 uart none
     # insmod hdlcdrv
     # insmod baycom mode="ser12*" iobase=0x3f8 irq=4

A Par96 parallel port type modem on LPT1: using hardware DCD detection:

     # insmod hdlcdrv
     # insmod baycom mode="par96" iobase=0x378 irq=7 options=0

This is not really the preferred way to do it. The sethdlc utility works just as easily with one device as with many.

The sethdlc man page has the full details, but a couple of examples will illustrate the most important aspects of this configuration. The following examples assume you have already loaded the Baycom module using:

     # insmod hdlcdrv
     # insmod baycom

or that you compiled the kernel with the driver inbuilt.
Configure the bc0 device driver as a Parallel port Baycom modem on LPT1: with software DCD:

     # sethdlc -p -i bc0 mode par96 io 0x378 irq 7

Configure the bc1 device driver as a Serial port Baycom modem on COM1:

     # sethdlc -p -i bc1 mode "ser12*" io 0x3f8 irq 4

6.1.3. Configuring the AX.25 channel access parameters.

The AX.25 channel access parameters are the equivalent of the KISS ppersist, txdelay and slottime type parameters. Again you use the sethdlc utility for this. Again the sethdlc man page is the source of the most complete information, but another example or two won't hurt:

Configure the bc0 device with TxDelay of 200 mS, SlotTime of 100 mS, PPersist of 40 and half duplex:

     # sethdlc -i bc0 -a txd 200 slot 100 ppersist 40 half

Note that the timing values are in milliseconds.

6.1.3.1. Configuring the Kernel AX.25 to use the BayCom device

The BayCom driver creates standard network devices that the AX.25 Kernel code can use. Configuration is much the same as that for a PI or PacketTwin card. The first step is to configure the device with an AX.25 callsign. The ifconfig utility may be used to perform this.

     # /sbin/ifconfig bc0 hw ax25 VK2KTJ-15 up

will assign the BayCom device bc0 the AX.25 callsign VK2KTJ-15. Alternatively you can use the axparms command; you'll still need to use the ifconfig command to bring the device up though:

     # ifconfig bc0 up
     # axparms -setcall bc0 vk2ktj-15

The next step is to create an entry in the /etc/ax25/axports file as you would for any other device. The entry in the axports file is associated with the network device you've configured by the callsign you configure. The entry in the axports file that has the callsign that you configured the BayCom device with is the one that will be used to refer to it. You may then treat the new AX.25 device as you would any other. You can configure it for TCP/IP, add it to ax25d and run NetRom or Rose over it as you please.

6.1.4. Creating a SoundModem device.
Kernel Compile Options:

     Code maturity level options  --->
         [*] Prompt for development and/or incomplete code/drivers
     General setup  --->
         [*] Networking support
     Network device support  --->
         [*] Network device support
         ...
         [*] Radio network interfaces
         [*] Soundcard modem driver for AX.25
         [?] Soundmodem support for Soundblaster and compatible cards
         [?] Soundmodem support for WSS and Crystal cards
         [?] Soundmodem support for 1200 baud AFSK modulation
         [?] Soundmodem support for 4800 baud HAPN-1 modulation
         [?] Soundmodem support for 9600 baud FSK G3RUH modulation

Thomas Sailer has built a new driver for the kernel that allows you to use your soundcard as a modem. Connect your radio directly to your soundcard to play packet! Thomas recommends at least a 486DX2/66 if you want to use this software, as all of the digital signal processing is done by the main CPU.

The driver currently emulates 1200 bps AFSK, 4800 HAPN and 9600 FSK (G3RUH compatible) modem types. The only sound cards currently supported are SoundBlaster and WindowsSoundSystem compatible models.

The sound cards require some circuitry to help them drive the Push-To-Talk circuitry, and information on this is available from Thomas's SoundModem PTT circuit web page. There are quite a few possible options: detect the sound output from the soundcard, or use output from a parallel port, serial port or midi port. Circuit examples for each of these are on Thomas's site.

The SoundModem driver creates network devices called sm0, sm1, sm2, etc. when it is configured.

Note: the SoundModem driver competes for the same resources as the Linux sound driver. So if you wish to use the SoundModem driver you must ensure that the Linux sound driver is not installed. You can of course compile them both as modules and insert and remove them as you wish.

6.1.4.1. Configuring the sound card.

The SoundModem driver does not initialise the sound card.
The ax25-utils package includes a utility to do this called `setcrystal' that may be used for sound cards based on the Crystal chipset. If you have some other card then you will have to use some other software to initialise it. Its syntax is fairly straightforward:

     setcrystal [-w wssio] [-s sbio] [-f synthio] [-i irq] [-d dma] [-c dma2]

So, for example, if you wished to configure a soundblaster card at i/o base address 0x388, irq 10 and DMA 1 you would use:

     # setcrystal -s 0x388 -i 10 -d 1

To configure a WindowSoundSystem card at i/o base address 0x534, irq 5, DMA 3 you would use:

     # setcrystal -w 0x534 -i 5 -d 3

The [-f synthio] parameter is to set the synthesiser address, and the [-c dma2] parameter is to set the second DMA channel to allow full duplex operation.

6.1.4.2. Configuring the SoundModem driver.

When you have configured the soundcard you need to configure the driver, telling it where the sound card is located and what sort of modem you wish it to emulate. The sethdlc utility allows you to configure the driver with these parameters, or, if you have only one soundcard installed, you may specify the parameters on the insmod command line when you load the SoundModem module.

For example, a simple configuration, with one SoundBlaster soundcard configured as described above emulating a 1200 bps modem:

     # insmod hdlcdrv
     # insmod soundmodem mode="sbc:afsk1200" iobase=0x220 irq=5 dma=1

This is not really the preferred way to do it. The sethdlc utility works just as easily with one device as with many.

The sethdlc man page has the full details, but a couple of examples will illustrate the most important aspects of this configuration. The following examples assume you have already loaded the SoundModem modules using:

     # insmod hdlcdrv
     # insmod soundmodem

or that you compiled the kernel with the driver inbuilt.
Configure the driver to support the WindowsSoundSystem card we configured above to emulate a G3RUH 9600 compatible modem as device sm0, using a parallel port at 0x378 to key the Push-To-Talk:

       # sethdlc -p -i sm0 mode wss:fsk9600 io 0x534 irq 5 dma 3 pario 0x378

Configure the driver to support the SoundBlaster card we configured above to emulate a 4800 bps HAPN modem as device sm1, using the serial port located at 0x2f8 to key the Push-To-Talk:

       # sethdlc -p -i sm1 mode sbc:hapn4800 io 0x388 irq 10 dma 1 serio 0x2f8

Configure the driver to support the SoundBlaster card we configured above to emulate a 1200 bps AFSK modem as device sm1, using the serial port located at 0x2f8 to key the Push-To-Talk:

       # sethdlc -p -i sm1 mode sbc:afsk1200 io 0x388 irq 10 dma 1 serio 0x2f8

6.1.4.3. Configuring the AX.25 channel access parameters.

The AX.25 channel access parameters are the equivalent of the KISS ppersist, txdelay and slottime type parameters. You use the sethdlc utility for this as well. Again, the sethdlc man page is the source of the most complete information, but another example or two won't hurt:

Configure the sm0 device with a TxDelay of 100 mS, SlotTime of 50 mS, PPersist of 128 and full duplex:

       # sethdlc -i sm0 -a txd 100 slot 50 ppersist 128 full

Note that the timing values are in milliseconds.

6.1.4.4. Setting the audio levels and tuning the driver.

It is very important that the audio levels be set correctly for any radio based modem to work. This is equally true of the SoundModem. Thomas has developed some utility programs that make this task easier: smdiag and smmixer. smdiag provides two types of display, either an oscilloscope type display or an eye pattern type display. smmixer allows you to actually adjust the transmit and receive audio levels.

To start the smdiag utility in `eye' mode for the SoundModem device sm0 you would use:

       # smdiag -i sm0 -e

To start the smmixer utility for the SoundModem device sm0 you would use:

       # smmixer -i sm0

6.1.4.5.
Configuring the Kernel AX.25 to use the SoundModem

The SoundModem driver creates standard network devices that the AX.25 kernel code can use. Configuration is much the same as that for a PI or PacketTwin card.

The first step is to configure the device with an AX.25 callsign. The ifconfig utility may be used to perform this:

       # /sbin/ifconfig sm0 hw ax25 VK2KTJ-15 up

will assign the SoundModem device sm0 the AX.25 callsign VK2KTJ-15. Alternatively you can use the axparms command, but you still need the ifconfig utility to bring the device up:

       # ifconfig sm0 up
       # axparms -setcall sm0 vk2ktj-15

The next step is to create an entry in the /etc/ax25/axports file as you would for any other device. The entry in the axports file is associated with the network device you've configured by the callsign you configure: the entry in the axports file that has the callsign that you configured the SoundModem device with is the one that will be used to refer to it.

You may then treat the new AX.25 device as you would any other. You can configure it for TCP/IP, add it to ax25d and run NetRom or Rose over it as you please.

6.1.5. Creating a PI card device.

Kernel Compile Options:

       General setup  --->
           [*] Networking support
       Network device support  --->
           [*] Network device support
           ...
           [*] Radio network interfaces
           [*] Ottawa PI and PI/2 support for AX.25

The PI card device driver creates devices named `pi[0-9][ab]'. The first PI card detected will be allocated `pi0', the second `pi1' etc. The `a' and `b' refer to the first and second physical interface on the PI card. If you have built your kernel to include the PI card driver, and the card has been properly detected, then you can use the following command to configure the network device:

       # /sbin/ifconfig pi0a hw ax25 VK2KTJ-15 up

This command would configure the first port on the first PI card detected with the callsign VK2KTJ-15 and make it active.
To use the device all you now need to do is configure an entry in your /etc/ax25/axports file with a matching callsign/ssid and you will be ready to continue on. The PI card driver was written by David Perry.

6.1.6. Creating a PacketTwin device.

Kernel Compile Options:

       General setup  --->
           [*] Networking support
       Network device support  --->
           [*] Network device support
           ...
           [*] Radio network interfaces
           [*] Gracilis PackeTwin support for AX.25

The PacketTwin card device driver creates devices named `pt[0-9][ab]'. The first PacketTwin card detected will be allocated `pt0', the second `pt1' etc. The `a' and `b' refer to the first and second physical interface on the PacketTwin card. If you have built your kernel to include the PacketTwin card driver, and the card has been properly detected, then you can use the following command to configure the network device:

       # /sbin/ifconfig pt0a hw ax25 VK2KTJ-15 up

This command would configure the first port on the first PacketTwin card detected with the callsign VK2KTJ-15 and make it active.

To use the device all you now need to do is configure an entry in your /etc/ax25/axports file with a matching callsign/ssid and you will be ready to continue on. The PacketTwin card driver was written by Craig Small VK2XLZ.

6.1.7. Creating a generic SCC device.

Kernel Compile Options:

       General setup  --->
           [*] Networking support
       Network device support  --->
           [*] Network device support
           ...
           [*] Radio network interfaces
           [*] Z8530 SCC KISS emulation driver for AX.25

Joerg Reuter, DL1BKE, jreuter@poboxes.com has developed generic support for Z8530 SCC based cards. His driver is configurable to support a range of different types of cards and presents an interface that looks like a KISS TNC, so you can treat it as though it were a KISS TNC.

6.1.7.1. Obtaining and building the configuration tool package.
While the kernel driver is included in the standard kernel distribution, Joerg distributes more recent versions of his driver with the suite of configuration tools that you will need to obtain as well. You can obtain the configuration tools package from Joerg's web page, or:

       db0bm.automation.fh-aachen.de   /incoming/dl1bke/
       insl1.etec.uni-karlsruhe.de     /pub/hamradio/linux/z8530/
       ftp.ucsd.edu                    /hamradio/packet/tcpip/linux
                                       /hamradio/packet/tcpip/incoming/

You will find multiple versions; choose the one that best suits the kernel you intend to use:

       z8530drv-2.4a.dl1bke.tar.gz     2.0.*
       z8530drv-utils-3.0.tar.gz       2.1.6 or greater

The following commands were what I used to compile and install the package for kernel version 2.0.30:

       # cd /usr/src
       # gzip -dc z8530drv-2.4a.dl1bke.tar.gz | tar xvpofz -
       # cd z8530drv
       # make clean
       # make dep
       # make module        # If you want to build the driver as a module
       # make for_kernel    # If you want the driver built into your kernel
       # make install

After the above is complete you should have three new programs installed in your /sbin directory: gencfg, sccinit and sccstat. It is these programs that you will use to configure the driver for your card. You will also have a group of new special device files created in /dev called scc0-scc7. These will be used later and will be the `KISS' devices you will end up using.

If you chose to `make for_kernel' then you will need to recompile your kernel. To ensure that you include support for the z8530 driver you must be sure to answer `Y' to `Z8530 SCC kiss emulation driver for AX.25' when asked during a kernel `make config'.

If you chose to `make module' then the new scc.o will have been installed in the appropriate /lib/modules directory and you do not need to recompile your kernel. Remember to use the insmod command to load the module before you try to configure it.

6.1.7.2. Configuring the driver for your card.
The z8530 SCC driver has been designed to be as flexible as possible, so as to support as many different types of cards as possible. With this flexibility has come some cost in configuration. There is more comprehensive documentation in the package and you should read this if you have any problems. You should particularly look at doc/scc_eng.doc or doc/scc_ger.doc for more detailed information. I've paraphrased the important details, but as a result there is a lot of lower level detail that I have not included.

The main configuration file is read by the sccinit program and is called /etc/z8530drv.conf. This file is broken into two main stages: configuration of the hardware parameters, and channel configuration. After you have configured this file you need only add:

       # sccinit

into the rc file that configures your network and the driver will be initialised according to the contents of the configuration file. You must do this before you attempt to use the driver.

6.1.7.2.1. Configuration of the hardware parameters.

The first section is broken into stanzas, each stanza representing an 8530 chip. Each stanza is a list of keywords with arguments. You may specify up to four SCC chips in this file by default. The #define MAXSCC 4 in scc.c can be increased if you require support for more. The allowable keywords and arguments are:

chip
       the chip keyword is used to separate stanzas. It will take
       anything as an argument. The arguments are not used.

data_a
       this keyword is used to specify the address of the data port for
       the z8530 channel `A'. The argument is a hexadecimal number e.g.
       0x300

ctrl_a
       this keyword is used to specify the address of the control port
       for the z8530 channel `A'. The argument is a hexadecimal number
       e.g. 0x304

data_b
       this keyword is used to specify the address of the data port for
       the z8530 channel `B'. The argument is a hexadecimal number e.g.
       0x301

ctrl_b
       this keyword is used to specify the address of the control port
       for the z8530 channel `B'.
       The argument is a hexadecimal number e.g. 0x305

irq
       this keyword is used to specify the IRQ used by the 8530 SCC
       described in this stanza. The argument is an integer e.g. 5

pclock
       this keyword is used to specify the frequency of the clock at the
       PCLK pin of the 8530. The argument is an integer frequency in Hz
       which defaults to 4915200 if the keyword is not supplied.

board
       the type of board supporting this 8530 SCC. The argument is a
       character string. The allowed values are:

              PA0HZP   the PA0HZP SCC Card
              EAGLE    the Eagle card
              PC100    the DRSI PC100 SCC card
              PRIMUS   the PRIMUS-PC (DG9BL) card
              BAYCOM   BayCom (U)SCC card

escc
       this keyword is optional and is used to enable support for the
       Extended SCC chips (ESCC) such as the 8580, 85180, or the 85280.
       The argument is a character string with allowed values of `yes'
       or `no'. The default is `no'.

vector
       this keyword is optional and specifies the address of the vector
       latch (also known as "intack port") for PA0HZP cards. There can
       be only one vector latch for all chips. The default is 0.

special
       this keyword is optional and specifies the address of the special
       function register on several cards. The default is 0.

option
       this keyword is optional and defaults to 0.
Some example configurations for the more popular cards are as follows:

BayCom USCC:

       chip 1
       data_a 0x300
       ctrl_a 0x304
       data_b 0x301
       ctrl_b 0x305
       irq 5
       board BAYCOM
       #
       # SCC chip 2
       #
       chip 2
       data_a 0x302
       ctrl_a 0x306
       data_b 0x303
       ctrl_b 0x307
       board BAYCOM

PA0HZP SCC card:

       chip 1
       data_a 0x153
       data_b 0x151
       ctrl_a 0x152
       ctrl_b 0x150
       irq 9
       pclock 4915200
       board PA0HZP
       vector 0x168
       escc no
       #
       #
       #
       chip 2
       data_a 0x157
       data_b 0x155
       ctrl_a 0x156
       ctrl_b 0x154
       irq 9
       pclock 4915200
       board PA0HZP
       vector 0x168
       escc no

DRSI SCC card:

       chip 1
       data_a 0x303
       data_b 0x301
       ctrl_a 0x302
       ctrl_b 0x300
       irq 7
       pclock 4915200
       board DRSI
       escc no

If you already have a working configuration for your card under NOS, then you can use the gencfg command to convert the PE1CHL NOS driver commands into a form suitable for use in the z8530 driver configuration file. To use gencfg you simply invoke it with the same parameters as you used for the PE1CHL driver in NET/NOS. For example:

       # gencfg 2 0x150 4 2 0 1 0x168 9 4915200

will generate a skeleton configuration for the OptoSCC card.

6.1.7.3. Channel Configuration

The Channel Configuration section is where you specify all of the other parameters associated with the port you are configuring. Again this section is broken into stanzas. One stanza represents one logical port, and therefore there would be two of these for each one of the hardware parameters stanzas, as each 8530 SCC supports two ports. These keywords and arguments are also written to the /etc/z8530drv.conf file and must appear after the hardware parameters section. Sequence is very important in this section, but if you stick with the suggested sequence it should work ok. The keywords and arguments are:

device
       this keyword must be the first line of a port definition and
       specifies the name of the special device file that the rest of
       the configuration applies to. e.g. /dev/scc0

speed
       this keyword specifies the speed in bits per second of the
       interface. The argument is an integer: e.g.
       1200

clock
       this keyword specifies where the clock for the data will be
       sourced. Allowable values are:

              dpll       normal halfduplex operation
              external   MODEM supplies its own Rx/Tx clock
              divider    use fullduplex divider if installed.

mode
       this keyword specifies the data coding to be used. Allowable
       arguments are: nrzi or nrz

rxbuffers
       this keyword specifies the number of receive buffers to allocate
       memory for. The argument is an integer, e.g. 8.

txbuffers
       this keyword specifies the number of transmit buffers to allocate
       memory for. The argument is an integer, e.g. 8.

bufsize
       this keyword specifies the size of the receive and transmit
       buffers. The argument is in bytes and represents the total
       length of the frame, so it must also take into account the AX.25
       headers and not just the length of the data field. This keyword
       is optional and defaults to 384.

txdelay
       the KISS transmit delay value, the argument is an integer in mS.

persist
       the KISS persist value, the argument is an integer.

slot
       the KISS slot time value, the argument is an integer in mS.

tail
       the KISS transmit tail value, the argument is an integer in mS.

fulldup
       the KISS full duplex flag, the argument is an integer. 1==Full
       Duplex, 0==Half Duplex.

wait
       the KISS wait value, the argument is an integer in mS.

min
       the KISS min value, the argument is an integer in S.

maxkey
       the KISS maximum keyup time, the argument is an integer in S.

idle
       the KISS idle timer value, the argument is an integer in S.

maxdef
       the KISS maxdef value, the argument is an integer.

group
       the KISS group value, the argument is an integer.

txoff
       the KISS txoff value, the argument is an integer in mS.

softdcd
       the KISS softdcd value, the argument is an integer.

slip
       the KISS slip flag, the argument is an integer.

6.1.7.4. Using the driver.

To use the driver you simply treat the /dev/scc* devices just as you would a serial tty device with a KISS TNC connected to it.
For example, to configure Linux kernel networking to use your SCC card you could use something like:

       # kissattach -s 4800 /dev/scc0 VK2KTJ

You can also use NOS to attach to it in precisely the same way. From JNOS, for example, you would use something like:

       attach asy scc0 0 ax25 scc0 256 256 4800

6.1.7.5. The sccstat and sccparam tools.

To assist in the diagnosis of problems you can use the sccstat program to display the current configuration of an SCC device. To use it try:

       # sccstat /dev/scc0

It will display a very large amount of information relating to the configuration and health of the /dev/scc0 SCC port. The sccparam command allows you to change or modify a configuration after you have booted. Its syntax is very similar to the NOS param command, so to set the txtail setting of a device to 100mS you would use:

       # sccparam /dev/scc0 txtail 0x8

6.1.8. Creating a BPQ ethernet device.

Kernel Compile Options:

       General setup  --->
           [*] Networking support
       Network device support  --->
           [*] Network device support
           ...
           [*] Radio network interfaces
           [*] BPQ Ethernet driver for AX.25

Linux supports BPQ Ethernet compatibility. This enables you to run the AX.25 protocol over your Ethernet LAN and to interwork your Linux machine with other BPQ machines on the LAN.

The BPQ network devices are named `bpq[0-9]'. The `bpq0' device is associated with the `eth0' device, the `bpq1' device with the `eth1' device etc. Configuration is quite straightforward. You firstly must have configured a standard Ethernet device. This means you will have compiled your kernel to support your Ethernet card and tested that this works. Refer to the Ethernet-HOWTO for more information on how to do this.

To configure the BPQ support you need to configure the Ethernet device with an AX.25 callsign.
The following command will do this for you:

       # /sbin/ifconfig bpq0 hw ax25 vk2ktj-14 up

Again, remember that the callsign you specify should match the entry in the /etc/ax25/axports file that you wish to use for this port.

6.1.9. Configuring the BPQ Node to talk to the Linux AX.25 support.

BPQ Ethernet normally uses a multicast address. The Linux implementation does not, and instead it uses the normal Ethernet broadcast address. The NET.CFG file for the BPQ ODI driver should therefore be modified to look similar to this:

       LINK SUPPORT
               MAX STACKS 1
               MAX BOARDS 1

       LINK DRIVER E2000       ; or other MLID to suit your card
               INT 10          ;
               PORT 300        ; to suit your card
               FRAME ETHERNET_II

       PROTOCOL BPQ 8FF ETHERNET_II    ; required for BPQ - can change PID

       BPQPARAMS                       ; optional - only needed if you want
                                       ; to override the default target addr
               ETH_ADDR FF:FF:FF:FF:FF:FF      ; Target address

6.2. Creating the /etc/ax25/axports file.

The /etc/ax25/axports file is a simple text file that you create with a text editor. Its format is as follows:

       portname callsign baudrate paclen window description

where:

portname
       is a text name that you will refer to the port by.

callsign
       is the AX.25 callsign you want to assign to the port.

baudrate
       is the speed at which you wish the port to communicate with your
       TNC.

paclen
       is the maximum packet length you want to configure the port to
       use for AX.25 connected mode connections.

window
       is the AX.25 window (K) parameter. This is the same as the
       MAXFRAME setting of many TNCs.

description
       is a textual description of the port.

In my case, mine looks like:

       radio   VK2KTJ-15       4800    256     2       4800bps 144.800 MHz
       ether   VK2KTJ-14       10000000 256    2       BPQ/ethernet device

Remember, you must assign a unique callsign/ssid to each AX.25 port you create. Create one entry for each AX.25 device you want to use; this includes KISS, Baycom, SCC, PI, PT and SoundModem ports. Each entry here describes exactly one AX.25 network device.
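Because the callsign/ssid is what links an axports entry to its network device, it can be handy to check which port name a given callsign maps to before bringing a device up. The following sketch shows one way to do this with awk; the file path and sample entries are illustrative only and mirror the example axports file above:

```shell
# Build a sample axports file matching the example entries above.
cat > /tmp/axports <<'EOF'
radio VK2KTJ-15 4800 256 2 4800bps 144.800 MHz
ether VK2KTJ-14 10000000 256 2 BPQ/ethernet device
EOF

# Look up the port name (field 1) associated with a callsign/ssid (field 2).
awk -v call="VK2KTJ-15" '$2 == call { print $1 }' /tmp/axports
# prints: radio
```

The same one-liner works against the real /etc/ax25/axports file once you have created it.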
The entries in this file are associated with the network devices by the callsign/ssid. This is at least one good reason for requiring a unique callsign/ssid on each port.

6.3. Configuring AX.25 routing.

You may wish to configure default digipeater paths for specific hosts. This is useful for both normal AX.25 connections and also IP based connections. The axparms command enables you to do this. Again, the man page offers a complete description, but a simple example might be:

       # /usr/sbin/axparms -route add radio VK2XLZ VK2SUT

This command would set a digipeater entry for VK2XLZ via VK2SUT on the AX.25 port named radio.

7. Configuring an AX.25 interface for TCP/IP.

It is very simple to configure an AX.25 port to carry TCP/IP. If you have KISS interfaces then there are two methods for configuring an IP address. The kissattach command has an option that allows you to specify an IP address. The more conventional method using the ifconfig command will work on all interface types. So, modifying the previous KISS example:

       # /usr/sbin/kissattach -i 44.136.8.5 -m 512 /dev/ttyS0 radio
       # /sbin/route add -net 44.136.8.0 netmask 255.255.255.0 ax0
       # /sbin/route add default ax0

to create the AX.25 interface with an IP address of 44.136.8.5 and an MTU of 512 bytes. You should still use ifconfig to configure the other parameters if necessary. If you have any other interface type then you use the ifconfig program to configure the IP address and netmask details for the port and add a route via the port, just as you would for any other TCP/IP interface.
The following example is for a PI card device, but would work equally well for any other AX.25 network device:

       # /sbin/ifconfig pi0a 44.136.8.5 netmask 255.255.255.0 up
       # /sbin/ifconfig pi0a broadcast 44.136.8.255 mtu 512
       # /sbin/route add -net 44.136.8.0 netmask 255.255.255.0 pi0a
       # /sbin/route add default pi0a

The commands listed above are typical of the sort of configuration many of you would be familiar with if you have used NOS or any of its derivatives, or any other TCP/IP software. Note that the default route might not be required in your configuration if you have some other network device configured.

To test it out, try a ping or a telnet to a local host:

       # ping -i 5 44.136.8.58

Note the use of the `-i 5' argument to ping to tell it to send pings every 5 seconds instead of its default of 1 second.

8. Configuring a NetRom port.

The NetRom protocol relies on, and uses, the AX.25 ports you have created. The NetRom protocol rides on top of the AX.25 protocol. To configure NetRom on an AX.25 interface you must configure two files. One file describes the NetRom interfaces, and the other describes which of the AX.25 ports will carry NetRom. You can configure multiple NetRom ports, each with its own callsign and alias; the same procedure applies for each.

8.1. Configuring /etc/ax25/nrports

The first is the /etc/ax25/nrports file. This file describes the NetRom ports in much the same way as the /etc/ax25/axports file describes the AX.25 ports. Each NetRom device you wish to create must have an entry in the /etc/ax25/nrports file. Normally a Linux machine would have only one NetRom device configured, which would use a number of the AX.25 ports defined. In some situations you might wish a special service such as a BBS to have a separate NetRom alias, and so you would create more than one. This file is formatted as follows:

       name callsign alias paclen description

Where:

name
       is the text name that you wish to refer to the port by.
callsign
       is the callsign that the NetRom traffic from this port will use.
       Note, this is not the address that users should connect to to
       get access to a node style interface. (The node program is
       covered later). This callsign/ssid should be unique and should
       not appear elsewhere in either the /etc/ax25/axports or the
       /etc/ax25/nrports files.

alias
       is the NetRom alias this port will have assigned to it.

paclen
       is the maximum size of NetRom frames transmitted by this port.

description
       is a free text description of the port.

An example would look something like the following:

       netrom  VK2KTJ-9        LINUX   236     Linux Switch Port

This example creates a NetRom port known to the rest of the NetRom network as `LINUX:VK2KTJ-9'. This file is used by programs such as the call program.

8.2. Configuring /etc/ax25/nrbroadcast

The second file is the /etc/ax25/nrbroadcast file. This file may contain a number of entries. There would normally be one entry for each AX.25 port that you wish to allow NetRom traffic on. This file is formatted as follows:

       axport min_obs def_qual worst_qual verbose

Where:

axport
       is the port name obtained from the /etc/ax25/axports file. If
       you do not have an entry in /etc/ax25/nrbroadcast for a port
       then no NetRom routing will occur and any received NetRom
       broadcasts will be ignored for that port.

min_obs
       is the minimum obsolescence value for the port.

def_qual
       is the default quality for the port.

worst_qual
       is the worst quality value for the port; any routes under this
       quality will be ignored.

verbose
       is a flag determining whether full NetRom routing broadcasts will
       occur from this port or only a routing broadcast advertising the
       node itself.

An example would look something like the following:

       radio   1       200     100     1

8.3. Creating the NetRom Network device

When you have the two configuration files completed you must create the NetRom device in much the same way as you did for the AX.25 devices. This time you use the nrattach command.
The nrattach command works in just the same way as the axattach command, except that it creates NetRom network devices called `nr[0-9]'. Again, the first time you use the nrattach command it creates the `nr0' device, the second time it creates the `nr1' device etc. To create the network device for the NetRom port we've defined we would use:

       # nrattach netrom

This command would start the NetRom device (nr0) named netrom, configured with the details specified in the /etc/ax25/nrports file.

8.4. Starting the NetRom daemon

The Linux kernel does all of the NetRom protocol and switching, but does not manage some functions. The NetRom daemon manages the NetRom routing tables and generates the NetRom routing broadcasts. You start the NetRom daemon with the command:

       # /usr/sbin/netromd -i

You should soon see the /proc/net/nr_neigh file filling up with information about your NetRom neighbours. Remember to put the /usr/sbin/netromd command in your rc files so that it is started automatically each time you reboot.

8.5. Configuring NetRom routing.

You may wish to configure static NetRom routes for specific hosts. The nrparms command enables you to do this. Again, the man page offers a complete description, but a simple example might be:

       # /usr/sbin/nrparms -nodes VK2XLZ-10 + #MINTO 120 5 radio VK2SUT-9

This command would set a NetRom route to #MINTO:VK2XLZ-10 via a neighbour VK2SUT-9 on my AX.25 port called `radio'.

You can manually create entries for new neighbours using the nrparms command as well. For example:

       # /usr/sbin/nrparms -routes radio VK2SUT-9 + 120

This command would create VK2SUT-9 as a NetRom neighbour with a quality of 120; this entry will be locked and will not be deleted automatically.

9. Configuring a NetRom interface for TCP/IP.

Configuring a NetRom interface for TCP/IP is almost identical to configuring an AX.25 interface for TCP/IP.
Again you can either specify the IP address and mtu on the nrattach command line, or use the ifconfig and route commands, but you need to manually add arp entries for hosts you wish to route to, because there is no mechanism available for your machine to learn what NetRom address it should use to reach a particular IP host.

So, to create an nr0 device with an IP address of 44.136.8.5, an mtu of 512 and configured with the details from the /etc/ax25/nrports file for a NetRom port named netrom, you would use:

       # /usr/sbin/nrattach -i 44.136.8.5 -m 512 netrom
       # route add 44.136.8.5 nr0

or you could use something like the following commands manually:

       # /usr/sbin/nrattach netrom
       # ifconfig nr0 44.136.8.5 netmask 255.255.255.0 hw netrom VK2KTJ-9
       # route add 44.136.8.5 nr0

Then for each IP host you wish to reach via NetRom you need to set route and arp entries. To reach a destination host with an IP address of 44.136.80.4 at NetRom address BBS:VK3BBS via a NetRom neighbour with callsign VK2SUT-0 you would use commands as follows:

       # route add 44.136.80.4 nr0
       # arp -t netrom -s 44.136.80.4 vk2sut-0
       # nrparms -nodes vk3bbs + BBS 120 6 sl0 vk2sut-0

The `120' and `6' arguments to the nrparms command are the NetRom quality and obsolescence count values for the route.

10. Configuring a Rose port.

The Rose packet layer protocol is similar to layer three of the X.25 specification. The kernel based Rose support is a modified version of the FPAC Rose implementation. The Rose packet layer protocol relies on, and uses, the AX.25 ports you have created. The Rose protocol rides on top of the AX.25 protocol. To configure Rose you must create a configuration file that describes the Rose ports you want. You can create multiple Rose ports if you wish; the same procedure applies for each.

10.1. Configuring /etc/ax25/rsports

The file where you configure your Rose interfaces is the /etc/ax25/rsports file.
This file describes the Rose ports in much the same way as the /etc/ax25/axports file describes the AX.25 ports. This file is formatted as follows:

       name address description

Where:

name
       is the text name that you wish to refer to the port by.

address
       is the 10 digit Rose address you wish to assign to this port.

description
       is a free text description of the port.

An example would look something like the following:

       rose    5050294760      Rose Port

Note that Rose will use the default callsign/ssid configured on each AX.25 port unless you specify otherwise. To configure a separate callsign/ssid for Rose to use on each port you use the rsparms command as follows:

       # /usr/sbin/rsparms -call VK2KTJ-10

This example would make Linux listen for and use the callsign/ssid VK2KTJ-10 on all of the configured AX.25 ports for Rose calls.

10.2. Creating the Rose Network device.

When you have created the /etc/ax25/rsports file you may create the Rose device in much the same way as you did for the AX.25 devices. This time you use the rsattach command. The rsattach command creates network devices named `rose[0-5]'. The first time you use the rsattach command it creates the `rose0' device, the second time it creates the `rose1' device etc. For example:

       # rsattach rose

This command would start the Rose device (rose0) configured with the details specified in the /etc/ax25/rsports file for the entry named `rose'.

10.3. Configuring Rose Routing

The Rose protocol currently supports only static routing. The rsparms utility allows you to configure your Rose routing table under Linux. For example:

       # rsparms -nodes add 5050295502 radio vk2xlz

would add a route to Rose node 5050295502 via an AX.25 port named `radio' in your /etc/ax25/axports file to a neighbour with the callsign VK2XLZ. You may specify a route with a mask to capture a number of Rose destinations into a single routing entry.
The syntax looks like:

       # rsparms -nodes add 5050295502/4 radio vk2xlz

which would be identical to the previous example except that it would match any destination address that matched the first four digits supplied, in this case any address commencing with the digits 5050. An alternate form for this command is:

       # rsparms -nodes add 5050/4 radio vk2xlz

which is probably the less ambiguous form.

11. Making AX.25/NetRom/Rose calls.

Now that you have all of your AX.25, NetRom and Rose interfaces configured and active, you should be able to make test calls. The AX25 Utilities package includes a program called `call' which is a splitscreen terminal program for AX.25, NetRom and Rose.

A simple AX.25 call would look like:

       /usr/bin/call radio VK2DAY via VK2SUT

A simple NetRom call to a node with an alias of SUNBBS would look like:

       /usr/bin/call netrom SUNBBS

A simple Rose call to HEARD at node 5050882960 would look like:

       /usr/bin/call rose HEARD 5050882960

Note: you must tell call which port you wish to make the call on, as the same destination node might be reachable on any of the ports you have configured.

The call program is a linemode terminal program for making AX.25 calls. It recognises lines that start with `~' as command lines. The `~.' command will close the connection. Please refer to the man page in /usr/man for more information.

12. Configuring Linux to accept Packet connections.

Linux is a powerful operating system and offers a great deal of flexibility in how it is configured. With this flexibility comes a cost in configuring it to do what you want. When configuring your Linux machine to accept incoming AX.25, NetRom or Rose connections there are a number of questions you need to ask yourself. The most important of these is: "What do I want users to see when they connect?"
People are developing neat little applications that may be used to provide services to callers. A simple example is the pms program included in the AX25 utilities; a more complex example is the node program, also included in the AX25 utilities. Alternatively you might want to give users a login prompt so that they can make use of a shell account, or you might even have written your own program, such as a customised database or a game, that you want people to connect to. Whatever you choose, you must tell the AX.25 software about it so that it knows what software to run when it accepts an incoming AX.25 connection.

The ax25d program is similar to the inetd program commonly used to accept incoming TCP/IP connections on unix machines. It sits and listens for incoming connections; when it detects one it checks a configuration file to determine what program to run and connects that program to the connection. Since this is the standard tool for accepting incoming AX.25, NetRom and Rose connections I'll describe how to configure it.

12.1. Creating the /etc/ax25/ax25d.conf file.

This file is the configuration file for the ax25d AX.25 daemon which handles incoming AX.25, NetRom and Rose connections. The file is a little cryptic looking at first, but you'll soon discover it is very simple in practice, with a small trap for you to be wary of. The general format of the ax25d.conf file is as follows:

       # This is a comment and is ignored by the ax25d program.
       [port_name] || <port_name> || {port_name}
       <peer>     window T1 T2 T3 idle N2 <mode> <uid> <cmd> <cmd-name> <args>
       <peer>     window T1 T2 T3 idle N2 <mode> <uid> <cmd> <cmd-name> <args>
       parameters window T1 T2 T3 idle N2 <mode>
       default    window T1 T2 T3 idle N2 <mode> <uid> <cmd> <cmd-name> <args>

Where:

#
       at the start of a line marks a comment and is completely ignored
       by the ax25d program.

<port_name>
       is the name of the AX.25, NetRom or Rose port as specified in
       the /etc/ax25/axports, /etc/ax25/nrports or /etc/ax25/rsports
       files.
  The name of the port is surrounded by `[]' brackets if it is an AX.25 port, `<>' brackets if it is a NetRom port, or `{}' brackets if it is a Rose port. There is an alternate form for this field: you may prefix the port name with `callsign/ssid via ' to indicate that you wish to accept calls to that callsign/ssid via this interface. The example below should illustrate this more clearly.

peer
  is the callsign of the peer node that this particular configuration applies to. If you don't specify an SSID here then any SSID will match.

window
  is the AX.25 Window parameter (K) or MAXFRAME parameter for this configuration.

T1
  is the Frame retransmission (T1) timer in half-second units.

T2
  is the amount of time the AX.25 software will wait for another incoming frame before preparing a response, in one-second units.

T3
  is the amount of time of inactivity before the AX.25 software will disconnect the session, in one-second units.

idle
  is the idle timer value in seconds.

N2
  is the number of consecutive retransmissions that will occur before the connection is closed.

modes
  provides a mechanism for determining certain types of general permissions. The modes are enabled or disabled by supplying a combination of characters, each representing a permission. The characters may be in either upper or lower case and must be in a single block with no spaces.

     u/U  UTMP - currently unsupported.
     v/V  Validate call - currently unsupported.
     q/Q  Quiet - don't log the connection.
     n/N  Check NetRom Neighbour - currently unsupported.
     d/D  Disallow Digipeaters - connections must be direct, not digipeated.
     l/L  Lockout - don't allow the connection.
     */0  Marker - place marker, no mode set.

uid
  is the userid that the program to be run to support the connection should be run as.

cmd
  is the full pathname of the command to be run, with no arguments specified.

args0
  is the text that should appear in a ps listing as the name of the running command (normally the same as cmd but without the directory path information).
args
  are the command line arguments to be passed to the cmd when it is run. You can pass useful information into these arguments by use of the following tokens:

     %d  Name of the port the connection was received on.
     %U  AX.25 callsign of the connected party without the SSID, in uppercase.
     %u  AX.25 callsign of the connected party without the SSID, in lowercase.
     %S  AX.25 callsign of the connected party with the SSID, in uppercase.
     %s  AX.25 callsign of the connected party with the SSID, in lowercase.
     %P  AX.25 callsign of the remote node that the connection came in from,
         without the SSID, in uppercase.
     %p  AX.25 callsign of the remote node that the connection came in from,
         without the SSID, in lowercase.
     %R  AX.25 callsign of the remote node that the connection came in from,
         with the SSID, in uppercase.
     %r  AX.25 callsign of the remote node that the connection came in from,
         with the SSID, in lowercase.

You need one section in the above format for each AX.25, NetRom or Rose interface you want to accept incoming AX.25, NetRom or Rose connections on.

There are two special lines in each section: one starts with the string `parameters' and the other starts with the string `default' (yes, there is a difference). These lines serve special functions.

The `default' line's purpose should be obvious: it acts as a catch-all, so that any incoming connection on the interface that doesn't match a specific rule will match the `default' rule. If you don't have a `default' rule, then any connection not matching a specific rule will be disconnected immediately without notice.

The `parameters' line is a little more subtle, and here is the trap I mentioned earlier. In any of the fields for any definition for a peer you can use the `*' character to say `use the default value'. The `parameters' line is what sets those default values. The kernel software itself has some defaults which will be used if you don't specify any using the `parameters' entry.
The trap is that these defaults apply only to those rules below the `parameters' line, not to those above it. You may have more than one `parameters' rule per interface definition, and in this way you may create groups of default configurations. It is important to note that the `parameters' rule does not allow you to set the `uid' or `command' fields.

12.2. A simple example ax25d.conf file.

Ok, an illustrative example:

     # ax25d.conf for VK2KTJ - 02/03/97
     # This configuration uses the AX.25 port defined earlier.
     #
     #            Win T1  T2  T3   idl N2
     [VK2KTJ-0 via radio]
     parameters   1   10  *   *    *   *   *
     VK2XLZ       *   *   *   *    *   *   *  *  root /usr/sbin/axspawn axspawn %u +
     VK2DAY       *   *   *   *    *   *   *  *  root /usr/sbin/axspawn axspawn %u +
     NOCALL       *   *   *   *    *   *   L
     default      1   10  5   100  180 5   *  root /usr/sbin/pms pms -a -o vk2ktj
     #
     [VK2KTJ-1 via radio]
     default      *   *   *   *    *   0   root /usr/sbin/node node
     #
     <netrom>
     parameters   1   10  *   *    *   *   *
     NOCALL       *   *   *   *    *   *   L
     default      *   *   *   *    *   *   0  root /usr/sbin/node node
     #
     {VK2KTJ-0 via rose}
     parameters   1   10  *   *    *   *   *
     VK2XLZ       *   *   *   *    *   *   *  *  root /usr/sbin/axspawn axspawn %u +
     VK2DAY       *   *   *   *    *   *   *  *  root /usr/sbin/axspawn axspawn %u +
     NOCALL       *   *   *   *    *   *   L
     default      1   10  5   100  180 5   *  root /usr/sbin/pms pms -a -o vk2ktj
     #
     {VK2KTJ-1 via rose}
     default      *   *   *   *    *   0   root /usr/sbin/node node

This example says that anybody attempting to connect to the callsign `VK2KTJ-0' heard on the AX.25 port called `radio' will have the following rules applied. Anyone whose callsign is set to `NOCALL' will be locked out; note the use of mode `L'. The `parameters' line changes two parameters from the kernel defaults (Window and T1). The next two lines provide definitions for two stations who will have the /usr/sbin/axspawn program run for them; any copies of /usr/sbin/axspawn run this way will appear as axspawn in a ps listing, for convenience. The last line in the paragraph is the `catch-all' definition that everybody else will get (including VK2XLZ and VK2DAY using any SSID other than -1).
This definition sets all of the parameters explicitly and will cause the pms program to be run with command line arguments indicating that it is being run for an AX.25 connection and that the owner callsign is VK2KTJ. (See the `Configuring the PMS' section below for more details.)

The next configuration accepts calls to VK2KTJ-1 via the radio port and runs the node program for everybody that connects to it.

The next configuration is a NetRom configuration; note the use of the greater-than and less-than brackets instead of the square brackets. These denote a NetRom configuration. This configuration is simpler: it says that anyone connecting to our NetRom port called `netrom' will have the node program run for them, unless they have a callsign of `NOCALL', in which case they will be locked out.

The last two configurations are for incoming Rose connections, the first for people who have placed calls to `VK2KTJ-0' and the second for `VK2KTJ-1' at our Rose node address. These work in precisely the same way. Note the use of the curly braces to distinguish the ports as Rose ports.

This example is a contrived one, but I think it clearly illustrates the important features of the syntax of the configuration file. The configuration file is explained fully in the ax25d.conf man page. A more detailed example is included in the ax25-utils package that might be useful to you too.

12.3. Starting ax25d

When you have the two configuration files completed you start ax25d with the command:

     # /usr/sbin/ax25d

When this is run, people should be able to make AX.25 connections to your Linux machine. Remember to put the ax25d command in your rc files so that it is started automatically each time you reboot.

13. Configuring the node software.

The node software was developed by Tomi Manninen and was based on the original PMS program. It provides a fairly complete and flexible node capability that is easily configured.
It allows users, once they are connected, to make Telnet, NetRom, Rose and AX.25 connections out, and to obtain various sorts of information such as Finger, Nodes and Heard lists. You can configure the node to execute any Linux command you wish fairly simply. The node would normally be invoked from the ax25d program, although it is also capable of being invoked from the TCP/IP inetd program to allow users to telnet to your machine and obtain access to it, or of being run from the command line.

13.1. Creating the /etc/ax25/node.conf file.

The node.conf file is where the main configuration of the node takes place. It is a simple text file and its format is as follows:

     # /etc/ax25/node.conf
     # configuration file for the node(8) program.
     #
     # Lines beginning with '#' are comments and are ignored.

     # Hostname
     # Specifies the hostname of the node machine.
     hostname        radio.gw.vk2ktj.ampr.org

     # Local Network
     # Allows you to specify what is considered 'local' for the
     # purposes of permission checking using node.perms.
     localnet        44.136.8.96/29

     # Hide Ports
     # If specified, allows you to make ports invisible to users. The
     # listed ports will not be listed by the (P)orts command.
     hiddenports     rose netrom

     # Node Identification.
     # This will appear in the node prompt.
     NodeId          LINUX:VK2KTJ-9

     # NetRom port
     # This is the name of the netrom port that will be used for
     # outgoing NetRom connections from the node.
     NrPort          netrom

     # Node Idle Timeout
     # Specifies the idle time for connections to this node in seconds.
     idletimout      1800

     # Connection Idle Timeout
     # Specifies the idle timer for connections made via this node in
     # seconds.
     conntimeout     1800

     # Reconnect
     # Specifies whether users should be reconnected to the node
     # when their remote connections disconnect, or whether they
     # should be disconnected completely.
     reconnect       on

     # Command Aliases
     # Provide a way of making complex node commands simple.
     alias           CONV "telnet vk1xwt.ampr.org 3600"
     alias           BBS "connect radio vk2xsb"

     # External Command Aliases
     # Provide a means of executing external commands under the node.
     #   extcmd <name> <flags> <uid> <cmd> <args>
     # Flags == 1 is the only implemented function.
     # <args> is formatted as per ax25d.conf.
     extcmd          PMS 1 root /usr/sbin/pms pms -u %U -o VK2KTJ

     # Logging
     # Set logging to the system log. 3 is the noisiest, 0 is disabled.
     loglevel        3

     # The escape character
     # 20 = (Control-T)
     EscapeChar      20

13.2. Creating the /etc/ax25/node.perms file.

The node allows you to assign permissions to users. These permissions allow you to determine which users should be allowed to make use of options such as the (T)elnet and (C)onnect commands, for example, and which shouldn't. The node.perms file is where this information is stored, and each entry in it contains five key fields. In all fields an asterisk `*' character matches anything; this is useful for building default rules.

user
  The first field is the callsign or user to which the permissions should apply. Any SSID value is ignored, so you should just place the base callsign here.

method
  Each protocol or access method is also given permissions. For example, you might allow users who have connected via AX.25 or NetRom to use the (C)onnect option, but prevent others, such as those who are telnet connected from a non-local node, from having access to it. The second field therefore allows you to select which access method this permissions rule should apply to. The access methods allowed are:

     method  description
     ------  -----------------------------------------------------------
     ampr    User is telnet connected from an amprnet address (44.0.0.0)
     ax25    User connected by AX.25
     host    User started node from command line
     inet    User is telnet connected from a non-local, non-ampr address
     local   User is telnet connected from a 'local' host
     netrom  User connected by NetRom
     rose    User connected by Rose
     *       User connected by any means
port
  For AX.25 users you can control permissions on a port-by-port basis too, if you choose. This allows you to determine what AX.25 users are allowed to do based on which of your ports they have connected to. The third field contains the port name if you are using this facility. This is useful only for AX.25 connections.

password
  You may optionally configure the node so that it prompts users to enter a password when they connect. This might be useful to help protect specially configured users who have high authority levels. If the fourth field is set then its value will be the password that will be accepted.

permissions
  The permissions field is the final field in each entry in the file. It is coded as a bit field, with each facility having a bit value which, if set, allows the option to be used and, if not set, prevents the facility being used. The list of controllable facilities and their corresponding bit values is:

     value  description
     -----  -------------------------------------------------
         1  Login allowed.
         2  AX.25 (C)onnects allowed.
         4  NetRom (C)onnects allowed.
         8  (T)elnet to local hosts allowed.
        16  (T)elnet to amprnet (44.0.0.0) hosts allowed.
        32  (T)elnet to non-local, non-amprnet hosts allowed.
        64  Hidden ports allowed for AX.25 (C)onnects.
       128  Rose (C)onnects allowed.

To code the permissions value for a rule, simply take each of the permissions you want that user to have and add their values together. The resulting number is what you place in field five.

A sample node.perms might look like:

     # /etc/ax25/node.perms
     #
     # The node operator is VK2KTJ, has a password of 'secret' and
     # is allowed all permissions by all connection methods.
     vk2ktj  *       *  secret  255
     #
     # The following users are banned from connecting.
     NOCALL  *       *  *       0
     PK232   *       *  *       0
     PMS     *       *  *       0
     #
     # INET users are banned from connecting.
     *       inet    *  *       0
     #
     # AX.25, NetRom, Local, Host and AMPR users may (C)onnect and (T)elnet
     # to local and ampr hosts but not to other IP addresses.
     *       ax25    *  *       159
     *       netrom  *  *       159
     *       local   *  *       159
     *       host    *  *       159
     *       ampr    *  *       159

13.3. Configuring node to run from ax25d

The node program would normally be run by the ax25d program. To do this you need to add appropriate rules to the /etc/ax25/ax25d.conf file. In my configuration I wanted users to have a choice of either connecting to the node or connecting to other services. ax25d allows you to do this by cleverly creating port aliases. For example, given the ax25d configuration presented above, I want to configure node so that all users who connect to VK2KTJ-1 are given the node. To do this I add the following to my /etc/ax25/ax25d.conf file:

     [vk2ktj-1 via radio]
     default  *  *  *  *  *  0  root  /usr/sbin/node  node

This says that ax25d will answer any connection request for the callsign `VK2KTJ-1' heard on the AX.25 port named `radio', and will cause the node program to be run.

13.4. Configuring node to run from inetd

If you want users to be able to telnet to a port on your machine and obtain access to the node, you can do this fairly easily. The first thing to decide is what port users should connect to. In this example I've arbitrarily chosen port 3694, though Tomi gives details on how you could replace the normal telnet daemon with the node in his documentation.

You need to modify two files. To /etc/services you should add:

     node    3694/tcp        # OH2BNS's node software

and to /etc/inetd.conf you should add:

     node    stream  tcp  nowait  root  /usr/sbin/node  node

When this is done, and you have restarted the inetd program, any user who telnet connects to port 3694 of your machine will be prompted to log in and, if configured, for their password, and then they will be connected to the node.

14. Configuring axspawn.

The axspawn program is a simple program that allows AX.25 stations that connect to be logged in to your machine. It may be invoked from the ax25d program as described above in a manner similar to the node program.
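Returning briefly to the node.perms permissions field: each value is simply the sum of the facility bit values listed earlier. The value 159 used in the sample above can be checked with ordinary shell arithmetic (a quick sketch, not part of the AX25 utilities):

```shell
# node.perms value 159 is the sum of these facility bits:
# 1 (login) + 2 (AX.25 connect) + 4 (NetRom connect)
# + 8 (telnet to local hosts) + 16 (telnet to amprnet hosts)
# + 128 (Rose connect).
# Bits 32 (telnet elsewhere) and 64 (hidden ports) are deliberately left out.
echo $((1 + 2 + 4 + 8 + 16 + 128))
```

Running this prints 159, confirming the value used for the ax25, netrom, local, host and ampr rules above.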
To allow a user to log in to your machine you should add a line similar to the following to your /etc/ax25/ax25d.conf file:

     default  *  *  *  *  *  1  root  /usr/sbin/axspawn  axspawn  %u

If the line ends in the `+' character then the connecting user must hit return before they will be allowed to log in; the default is not to wait. Any individual host configurations that follow this line will have the axspawn program run when they connect.

When axspawn is run it first checks that the command line argument it is supplied is a legal callsign, and strips the SSID; then it checks the /etc/passwd file to see if that user has an account configured. If there is an account, and the password is either "" (null) or `+', then the user is logged in; if there is anything else in the password field the user is prompted to enter a password. If there is no existing account in the /etc/passwd file then axspawn may be configured to create one automatically.

14.1. Creating the /etc/ax25/axspawn.conf file.

You can alter the behaviour of axspawn in various ways by use of the /etc/ax25/axspawn.conf file. This file is formatted as follows:

     # /etc/ax25/axspawn.conf
     #
     # allow automatic creation of user accounts
     create          yes
     #
     # guest user if above is 'no' or everything else fails. Disable with "no".
     guest           no
     #
     # group id or name for autoaccount
     group           ax25
     #
     # first user id to use
     first_uid       2001
     #
     # maximum user id
     max_uid         3000
     #
     # where to add the home directory for the new users
     home            /home/ax25
     #
     # user shell
     shell           /bin/bash
     #
     # bind user id to callsign for outgoing connects.
     associate       yes

The eight configurable characteristics of axspawn are as follows:

#
  indicates a comment.

create
  If this field is set to yes then axspawn will attempt to automatically create a user account for any user who connects and does not already have an entry in the /etc/passwd file.

guest
  This field names the login name of the account that will be used for people who connect and do not already have accounts, if create is set to no.
  This is usually ax25 or guest.

group
  This field names the group that will be used for any users who connect and do not already have an entry in the /etc/passwd file.

first_uid
  This is the first userid that will be automatically assigned to newly created users.

max_uid
  This is the maximum number that will be used for the userid of new users.

home
  This is the home (login) directory of new users.

shell
  This is the login shell of any new users.

associate
  This flag indicates whether outgoing AX.25 connections made by this user after they log in will use their own callsign or your station's callsign.

15. Configuring the pms

The pms program is an implementation of a simple personal message system. It was originally written by Alan Cox. Dave Brown, N2RJT, has taken on further development of it. At present it is still very simple, supporting only the ability to send mail to the owner of the system and to obtain some limited system information, but Dave is working to expand its capability to make it more useful.

There are a couple of simple files that you should create to give users some information about the system, and then you need to add appropriate entries to the ax25d.conf file so that connected users are presented with the PMS.

15.1. Create the /etc/ax25/pms.motd file.

The /etc/ax25/pms.motd file contains the `message of the day' that users will be presented with after they connect and receive the usual BBS id header. The file is a simple text file; any text you include in this file will be sent to users.

15.2. Create the /etc/ax25/pms.info file.

The /etc/ax25/pms.info file is also a simple text file in which you would put more detailed information about your station or configuration. This file is presented to users in response to their issuing of the Info command from the PMS> prompt.

15.3. Associate AX.25 callsigns with system users.
When a connected user sends mail to an AX.25 callsign, the pms expects that callsign to be mapped, or associated, with a real system user on your machine. This is described in a section of its own.

15.4. Add the PMS to the /etc/ax25/ax25d.conf file.

Adding the pms to your ax25d.conf file is very simple, but there is one small thing you need to think about. Dave has added command line arguments to the PMS to allow it to handle a number of different text end-of-line conventions. AX.25 and NetRom by convention expect the end-of-line to be carriage return/linefeed, while the standard unix end-of-line is just newline.

So, for example, if you wanted to add an entry that meant that the default action for a connection received on an AX.25 port is to start the PMS, then you would add a line that looked something like:

     default  1  10  5  100  5  0  root  /usr/sbin/pms  pms  -a -o vk2ktj

This simply runs the pms program, telling it that it is connected to an AX.25 connection and that the PMS owner is vk2ktj. Check the man page for what you should specify for other connection methods.

15.5. Test the PMS.

To test the PMS, you can try the following command from the command line:

     # /usr/sbin/pms -u vk2ktj -o vk2ktj

Substitute your own callsign for mine. This will run the pms, telling it to use the unix end-of-line convention and that the user logging in is vk2ktj. You can do all the things connected users can. Additionally, you might try getting some other node to connect to you to confirm that your ax25d.conf configuration works.

16. Configuring the user_call programs.

The `user_call' programs are really called ax25_call and netrom_call. They are very simple programs designed to be called from ax25d to automate network connections to remote hosts. They may of course be called from a number of other places, such as shell scripts or other daemons like the node program. They are like a very simple call program.
They don't do any meddling with the data at all, so you'll have to worry about end-of-line handling yourself.

Let's start with an example of how you might use them. Imagine you have a small network at home, with one linux machine acting as your radio gateway and another machine, let's say a BPQ node, connected to it via an ethernet connection.

Normally if you wanted radio users to be able to connect to the BPQ node they would either have to digipeat through your linux node, or connect to the node program on your linux node and then connect from it. The ax25_call program can simplify this if it is called from the ax25d program.

Imagine the BPQ node has the callsign VK2KTJ-9 and that the linux machine has the AX.25/ethernet port named `bpq'. Let us also imagine the Linux gateway machine has a radio port called `radio'. An entry in /etc/ax25/ax25d.conf that looked like:

     [VK2KTJ-1 via radio]
     default  *  *  *  *  *  *  *  root  /usr/sbin/ax25_call  ax25_call bpq %u vk2ktj-9

would enable users to connect direct to `VK2KTJ-1', which would actually be the Linux ax25d daemon, and then be automatically switched to an AX.25 connection to `VK2KTJ-9' via the `bpq' interface.

There are all sorts of other possible configurations that you might try. The `netrom_call' and `rose_call' utilities work in similar ways. One amateur has used this utility to make connections to a remote BBS easier: normally the users would have to manually enter a long connection string to make the call, so he created an entry that made the BBS appear as though it were on the local network, by having his ax25d proxy the connection to the remote machine.

17. Configuring the Rose Uplink and Downlink commands

If you are familiar with the ROM based Rose implementation you will be familiar with the method by which AX.25 users make calls across a Rose network.
If a user's local Rose node has the callsign VK2KTJ-5 and the AX.25 user wants to connect to VK5XXX at the remote Rose node 5050882960, then they would issue the command:

     c vk5xxx v vk2ktj-5 5050 882960

At the remote node, VK5XXX would see an incoming connection with the local AX.25 user's callsign, digipeated via the remote Rose node's callsign.

The Linux Rose implementation does not support this capability in the kernel, but there are two application programs called rsuplnk and rsdwnlnk which perform this function.

17.1. Configuring a Rose downlink

To configure your Linux machine to accept a Rose connection and establish an AX.25 connection to any destination callsign that is not being listened for on your machine, you need to add an entry to your /etc/ax25/ax25d.conf file. Normally you would configure this entry to be the default behaviour for incoming Rose connections. For example, you might have Rose listeners operating for destinations like NODE-0 or HEARD-0 that you wish to handle locally, but for all other destination calls you may want to pass them to the rsdwnlnk command and assume they are AX.25 users.

A typical configuration would look like:

     #
     {* via rose}
     NOCALL   *  *  *  *  *  *  L
     default  *  *  *  *  *  *  -  root  /usr/sbin/rsdwnlnk  rsdwnlnk 4800 vk2ktj-5
     #

With this configuration, any user who established a Rose connection to your Linux node's address with a destination callsign that you were not specifically listening for would be converted into an AX.25 connection on the AX.25 port named 4800, with a digipeater path of VK2KTJ-5.

17.2. Configuring a Rose uplink

To configure your Linux machine to accept AX.25 connections in the same way that a ROM Rose node would, you must add an entry to your /etc/ax25/ax25d.conf file that looks similar to the following:

     #
     [VK2KTJ-5* via 4800]
     NOCALL   *  *  *  *  *  *  L
     default  *  *  *  *  *  *  -  root  /usr/sbin/rsuplnk  rsuplnk rose
     #

Note the special syntax for the local callsign.
The `*' character indicates that the application should be invoked if the callsign is heard in the digipeater path of a connection. This configuration would allow an AX.25 user to establish Rose calls using the example connect sequence presented in the introduction. Anybody attempting to digipeat via VK2KTJ-5 on the AX.25 port named 4800 would be handled by the rsuplnk command.

18. Associating AX.25 callsigns with Linux users.

There are a number of situations where it is highly desirable to associate a callsign with a linux user account. One example might be where a number of amateur radio operators share the same linux machine and wish to use their own callsign when making calls. Another is the case of PMS users wanting to talk to a particular user on your machine.

The AX.25 software provides a means of managing this association of linux user account names with callsigns. We've mentioned it once already in the PMS section, but I'm spelling it out here to be sure you don't miss it.

You make the association with the axparms command. An example looks like:

     # axparms -assoc vk2ktj terry

This command associates the AX.25 callsign vk2ktj with the user terry on the machine. So, for example, any mail for vk2ktj on the pms will be sent to the Linux account terry.

Remember to put these associations into your rc file so that they are available each time you reboot. Note: you should never associate a callsign with the root account, as this can cause configuration problems in other programs.

19. The /proc/ file system entries.

The /proc filesystem contains a number of files specifically related to the AX.25 and NetRom kernel software. These files are normally used by the AX25 utilities, but they are plainly formatted so you may be interested in reading them. The format is fairly easily understood so I don't think much explanation will be necessary.

/proc/net/arp
  contains the list of Address Resolution Protocol mappings of IP addresses to MAC layer protocol addresses.
  These can be AX.25, ethernet or some other MAC layer protocol addresses.

/proc/net/ax25
  contains a list of opened AX.25 sockets. These might be listening for a connection, or active sessions.

/proc/net/ax25_bpqether
  contains the AX.25-over-ethernet BPQ style callsign mappings.

/proc/net/ax25_calls
  contains the linux userid to callsign mappings set by the axparms -assoc command.

/proc/net/ax25_route
  contains AX.25 digipeater path information.

/proc/net/nr
  contains a list of opened NetRom sockets. These might be listening for a connection, or active sessions.

/proc/net/nr_neigh
  contains information about the NetRom neighbours known to the NetRom software.

/proc/net/nr_nodes
  contains information about the NetRom nodes known to the NetRom software.

/proc/net/rose
  contains a list of opened Rose sockets. These might be listening for a connection, or active sessions.

/proc/net/rose_nodes
  contains a mapping of Rose destinations to Rose neighbours.

/proc/net/rose_neigh
  contains a list of known Rose neighbours.

/proc/net/rose_routes
  contains a list of all established Rose connections.

20. AX.25, NetRom, Rose network programming.

Probably the biggest advantage of using the kernel based implementations of the amateur packet radio protocols is the ease with which you can develop applications and programs to use them. While the subject of Unix Network Programming is outside the scope of this document, I will describe the elementary details of how you can make use of the AX.25, NetRom and Rose protocols within your software.

20.1. The address families.

Network programming for AX.25, NetRom and Rose is quite similar to programming for TCP/IP under Linux. The major differences are the address families used and the address structures that need to be mangled into place.

The address family names for AX.25, NetRom and Rose are AF_AX25, AF_NETROM and AF_ROSE respectively.

20.2. The header files.
You must always include the `ax25.h' header file, and also the `netrom.h' or `rose.h' header files if you are dealing with those protocols. Simple top level skeletons would look something like the following.

For AX.25:

     #include <ax25.h>

     int s, addrlen = sizeof(struct full_sockaddr_ax25);
     struct full_sockaddr_ax25 sockaddr;
     sockaddr.fsa_ax25.sax25_family = AF_AX25;

For NetRom:

     #include <ax25.h>
     #include <netrom.h>

     int s, addrlen = sizeof(struct full_sockaddr_ax25);
     struct full_sockaddr_ax25 sockaddr;
     sockaddr.fsa_ax25.sax25_family = AF_NETROM;

For Rose:

     #include <ax25.h>
     #include <rose.h>

     int s, addrlen = sizeof(struct sockaddr_rose);
     struct sockaddr_rose sockaddr;
     sockaddr.srose_family = AF_ROSE;

20.3. Callsign mangling and examples.

There are routines within the lib/ax25.a library built in the AX25 utilities package that manage the callsign conversions for you. You can of course write your own if you wish.

The user_call utilities are excellent examples from which to work. The source code for them is included in the AX25 utilities package. If you spend a little time working with those you will soon see that ninety percent of the work is involved in just getting ready to open the socket: actually making the connection is easy, it is the preparation that takes time. The examples are simple enough not to be confusing. If you have any questions, you should feel free to direct them to the linux-hams mailing list and someone there will be sure to help you.

21. Some sample configurations.

Following are examples of the most common types of configurations. These are guides only, as there are as many ways of configuring your network as there are networks to configure, but they may give you a start.

21.1. Small Ethernet LAN with Linux as a router to Radio LAN

Many of you may have a small local area network at home and want to connect the machines on that network to your local radio LAN. This is the type of configuration I use at home.
I arranged to have a suitable block of addresses allocated to me that I could capture in a single route for convenience, and I use these on my Ethernet LAN. Your local IP coordinator will assist you in doing this if you want to try it as well. The addresses for the Ethernet LAN form a subset of the radio LAN addresses. The following configuration is the actual one for my linux router on my network at home:

                        . . . . . . . . .
       ---              .
        |   Network     .    /---------\                    Network
        | 44.136.8.96/29.    |         |                  44.136.8/24
        |               .    |  Linux  |                     \ | /
        |               .    |         |                      \|/
        |     eth0      .    | Router  |      /-----\    /----------\
        |--------------------|   and   |------| TNC |----|  Radio   |--- . . .
        |  44.136.8.97  .    | Server  | sl0  \-----/    \----------/
        |               .    |         | 44.136.8.5
        |               .    \_________/
       ---              .
                        . . . . . . . . .

     #!/bin/sh
     # /etc/rc.net
     # This configuration provides one KISS based AX.25 port and one
     # Ethernet device.
     #
     echo "/etc/rc.net"
     echo " Configuring:"
     #
     echo -n "   loopback:"
     /sbin/ifconfig lo 127.0.0.1
     /sbin/route add 127.0.0.1
     echo " done."
     #
     echo -n "   ethernet:"
     /sbin/ifconfig eth0 44.136.8.97 netmask 255.255.255.248 \
             broadcast 44.136.8.103 up
     /sbin/route add 44.136.8.97 eth0
     /sbin/route add -net 44.136.8.96 netmask 255.255.255.248 eth0
     echo " done."
     #
     echo -n "   AX.25: "
     kissattach -i 44.136.8.5 -m 512 /dev/ttyS1 4800
     ifconfig sl0 netmask 255.255.255.0 broadcast 44.136.8.255
     route add -host 44.136.8.5 sl0
     route add -net 44.136.8.0 window 1024 sl0
     #
     echo -n "   NetRom: "
     nrattach -i 44.136.8.5 netrom
     #
     echo " Routing:"
     /sbin/route add default gw 44.136.8.68 window 1024 sl0
     echo "   default route."
     echo done.
     # end

/etc/ax25/axports:

     # name  callsign  speed  paclen  window  description
     4800    VK2KTJ-0  4800   256     2       144.800 MHz

/etc/ax25/nrports:

     # name  callsign  alias  paclen  description
     netrom  VK2KTJ-9  LINUX  235     Linux Switch Port

/etc/ax25/nrbroadcast:

     # ax25_name  min_obs  def_qual  worst_qual  verbose
     4800         1        120       10          1

Some points to note here are:

o You must have IP_FORWARDING enabled in your kernel.
o  The AX.25 configuration files are pretty much those used as
   examples in the earlier sections; refer to those where necessary.

o  I've chosen to use an IP address for my radio port that is not
   within my home network block. I needn't have done so; I could just
   as easily have used 44.136.8.97 for that port too.

o  44.136.8.68 is my local IPIP encapsulated gateway and hence is
   where I point my default route.

o  Each of the machines on my Ethernet network has a route:

       route add -net 44.0.0.0 netmask 255.0.0.0 \
               gw 44.136.8.97 window 512 mss 512 eth0

   The use of the mss and window parameters means that I can get
   optimum performance from both local Ethernet and radio based
   connections.

o  I also run my smail, http, ftp and other daemons on the router
   machine, so that it is the only machine that needs to provide
   those facilities to others.

o  The router machine is a lowly 386DX20 with a 20 MB hard drive and
   a very minimal Linux configuration.

21.2.  IPIP encapsulated gateway configuration.

Linux is now very commonly used for TCP/IP encapsulated gateways
around the world. The new tunnel driver supports multiple encapsulated
routes and makes the older ipip daemon obsolete. A typical
configuration would look similar to the following.

          Network                                        Network
         154.27.3/24                                   44.136.16/24
            |            /-------------\                    \ | /
            |            |             |                     \|/
            |       eth0 |    Linux    | sl0  /-----\  /------|---\
       -----+------------|    IPIP     |------| TNC |--|  Radio   |---/
            | 154.27.3.20|   Gateway   | 44.136.16.1   |          |
            |            |             |               \----------/
            |            \_____________/

The configuration files of interest are:

       # /etc/rc.net
       # This file is a simple configuration that provides one KISS AX.25
       # radio port, one Ethernet device, and utilises the kernel tunnel
       # driver to perform the IPIP encapsulation/decapsulation
       #
       echo "/etc/rc.net"
       echo " Configuring:"
       #
       echo -n "   loopback:"
       /sbin/ifconfig lo 127.0.0.1
       /sbin/route add 127.0.0.1
       echo " done."
       #
       echo -n "   ethernet:"
       /sbin/ifconfig eth0 154.27.3.20 netmask 255.255.255.0 \
               broadcast 154.27.3.255 up
       /sbin/route add 154.27.3.20 eth0
       /sbin/route add -net 154.27.3.0 netmask 255.255.255.0 eth0
       echo " done."
       #
       echo -n "   AX.25: "
       kissattach -i 44.136.16.1 -m 512 /dev/ttyS1 4800
       /sbin/ifconfig sl0 netmask 255.255.255.0 broadcast 44.136.16.255
       /sbin/route add -host 44.136.16.1 sl0
       /sbin/route add -net 44.136.16.0 netmask 255.255.255.0 window 1024 sl0
       #
       echo -n "   tunnel:"
       /sbin/ifconfig tunl0 44.136.16.1 mtu 512 up
       #
       echo done.
       #
       echo -n "Routing ... "
       source /etc/ipip.routes
       echo done.
       #
       # end.

and:

       # /etc/ipip.routes
       # This file is generated using the munge script
       #
       /sbin/route add -net 44.134.8.0  netmask 255.255.255.0 tunl0 gw 134.43.26.1
       /sbin/route add -net 44.34.9.0   netmask 255.255.255.0 tunl0 gw 174.84.6.17
       /sbin/route add -net 44.13.28.0  netmask 255.255.255.0 tunl0 gw 212.37.126.3
       ...
       ...
       ...

/etc/ax25/axports:

       # name  callsign  speed  paclen  window  description
       4800    VK2KTJ-0  4800   256     2       144.800 MHz

Some points to note here are:

o  The new tunnel driver uses the gw field in the routing table in
   place of the pointopoint parameter to specify the address of the
   remote IPIP gateway. This is why it now supports multiple routes
   per interface.

o  You can configure two network devices with the same address. In
   this example both the sl0 and the tunl0 devices have been
   configured with the IP address of the radio port. This is done so
   that the remote gateway sees the correct address from your gateway
   in encapsulated datagrams sent to it.

o  The route commands used to specify the encapsulated routes can be
   automatically generated by a modified version of the munge script,
   which is included below. The route commands are then written to a
   separate file and read in using the bash command
   source /etc/ipip.routes (assuming you called the file with the
   routing commands /etc/ipip.routes), as illustrated. The source
   file must be in the NOS route command format.
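For reference, here is a sketch of how one NOS-format line maps onto
the generated route command, using the first route from the ipip.routes
example above. The fragment below mimics the field handling of the
munge script's awk stage (field 3 is network/prefix-length, field 5 is
the remote gateway); the 255.255.255.0 netmask is hard-coded here
because the sample route is a /24, where the real script looks the mask
up from the prefix length.

```shell
# A sample NOS gateways-file ('encap.txt' style) entry:
line="route addprivate 44.134.8/24 encap 134.43.26.1"

# Split field 3 on "/", pad the missing fourth octet with 0, and emit
# a Linux route command in the munge script's output format.
out=$(echo "$line" | awk '{
    split($3, s, "/")              # s[1]=network, s[2]=prefix length
    split(s[1], n, ".")
    for (i = 1; i <= 4; i++) if (n[i] == "") n[i] = "0"
    printf "route add -net %s.%s.%s.%s gw %s netmask 255.255.255.0 dev tunl0",
           n[1], n[2], n[3], n[4], $5
}')
echo "$out"
```

This prints the same route (modulo argument order) as the
corresponding /etc/ipip.routes entry shown earlier.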
o  Note the use of the window argument on the route command. Setting
   this parameter to an appropriate value improves the performance of
   your radio link.

The new tunnel-munge script:

       #!/bin/sh
       #
       # From: Ron Atkinson
       #
       # This script is basically the 'munge' script written by Bdale
       # N3EUA for the IPIP daemon, modified by Ron Atkinson N8FOW. Its
       # purpose is to convert a KA9Q NOS format gateways route file
       # (usually called 'encap.txt') into a Linux routing table format
       # for the IP tunnel driver.
       #
       # Usage: Gateway file on stdin, Linux route format file on stdout.
       #        eg. tunnel-munge < encap.txt > ampr-routes
       #
       # NOTE: Before you use this script be sure to check or change the
       #       following items:
       #
       #     1) Change the 'Local routes' and 'Misc user routes' sections
       #        to routes that apply to your own area (remove mine please!)
       #     2) On the fgrep line be sure to change the IP address to YOUR
       #        gateway Internet address. Failure to do so will cause
       #        serious routing loops.
       #     3) The default interface name is 'tunl0'. Make sure this is
       #        correct for your system.

       echo "#"
       echo "# IP tunnel route table built by $LOGNAME on `date`"
       echo "# by tunnel-munge script v960307."
       echo "#"
       echo "# Local routes"
       echo "route add -net 44.xxx.xxx.xxx netmask 255.mmm.mmm.mmm dev sl0"
       echo "#"
       echo "# Misc user routes"
       echo "#"
       echo "# remote routes"

       fgrep encap | grep "^route" | grep -v " XXX.XXX.XXX.XXX" | \
       awk '{
           split($3, s, "/")
           split(s[1], n, ".")
           if (n[1] == "") n[1]="0"
           if (n[2] == "") n[2]="0"
           if (n[3] == "") n[3]="0"
           if (n[4] == "") n[4]="0"
           if      (s[2] == "1")  mask="128.0.0.0"
           else if (s[2] == "2")  mask="192.0.0.0"
           else if (s[2] == "3")  mask="224.0.0.0"
           else if (s[2] == "4")  mask="240.0.0.0"
           else if (s[2] == "5")  mask="248.0.0.0"
           else if (s[2] == "6")  mask="252.0.0.0"
           else if (s[2] == "7")  mask="254.0.0.0"
           else if (s[2] == "8")  mask="255.0.0.0"
           else if (s[2] == "9")  mask="255.128.0.0"
           else if (s[2] == "10") mask="255.192.0.0"
           else if (s[2] == "11") mask="255.224.0.0"
           else if (s[2] == "12") mask="255.240.0.0"
           else if (s[2] == "13") mask="255.248.0.0"
           else if (s[2] == "14") mask="255.252.0.0"
           else if (s[2] == "15") mask="255.254.0.0"
           else if (s[2] == "16") mask="255.255.0.0"
           else if (s[2] == "17") mask="255.255.128.0"
           else if (s[2] == "18") mask="255.255.192.0"
           else if (s[2] == "19") mask="255.255.224.0"
           else if (s[2] == "20") mask="255.255.240.0"
           else if (s[2] == "21") mask="255.255.248.0"
           else if (s[2] == "22") mask="255.255.252.0"
           else if (s[2] == "23") mask="255.255.254.0"
           else if (s[2] == "24") mask="255.255.255.0"
           else if (s[2] == "25") mask="255.255.255.128"
           else if (s[2] == "26") mask="255.255.255.192"
           else if (s[2] == "27") mask="255.255.255.224"
           else if (s[2] == "28") mask="255.255.255.240"
           else if (s[2] == "29") mask="255.255.255.248"
           else if (s[2] == "30") mask="255.255.255.252"
           else if (s[2] == "31") mask="255.255.255.254"
           else                   mask="255.255.255.255"

           if (mask == "255.255.255.255")
               printf "route add -host %s.%s.%s.%s gw %s dev tunl0\n"\
                      ,n[1],n[2],n[3],n[4],$5
           else
               printf "route add -net %s.%s.%s.%s gw %s netmask %s dev tunl0\n"\
                      ,n[1],n[2],n[3],n[4],$5,mask
       }'

       echo "#"
       echo "# default the rest of amprnet via mirrorshades.ucsd.edu"
       echo "route add -net 44.0.0.0 gw 128.54.16.18 netmask 255.0.0.0 dev tunl0"
       echo "#"
       echo "# the end"

21.3.  AXIP encapsulated gateway configuration

Many Amateur Radio Internet gateways encapsulate AX.25, NetRom and
Rose in addition to tcp/ip. Encapsulation of AX.25 frames within IP
datagrams is described in RFC-1226 by Brian Kantor.

Mike Westerhof wrote an implementation of an AX.25 encapsulation
daemon for unix in 1991. The ax25-utils package includes a marginally
enhanced version of it for Linux.

An AXIP encapsulation program accepts AX.25 frames at one end, looks
at the destination AX.25 address to determine what IP address to send
them to, encapsulates them in a tcp/ip datagram and then transmits
them to the appropriate remote destination. It also accepts tcp/ip
datagrams that contain AX.25 frames, unwraps them and processes them
as if it had received them directly from an AX.25 port. To distinguish
IP datagrams containing AX.25 frames from other IP datagrams, AXIP
datagrams are coded with a protocol id of 4 (or 94, which is now
deprecated). This process is described in RFC-1226.

The ax25ipd program included in the ax25-utils package presents itself
as a program supporting a KISS interface across which you pass AX.25
frames, and an interface into the tcp/ip protocols. It is configured
with a single configuration file called /etc/ax25/ax25ipd.conf.

21.3.1.  AXIP configuration options.

The ax25ipd program has two major modes of operation: "digipeater"
mode and "tnc" mode. In "tnc" mode the daemon is treated as though it
were a KISS TNC: you pass KISS encapsulated frames to it and it will
transmit them. This is the usual configuration. In "digipeater" mode,
you treat the daemon as though it were an AX.25 digipeater. There are
subtle differences between these modes.
In the configuration file you configure "routes", or mappings between
destination AX.25 callsigns and the IP addresses of the hosts that you
want to send the AX.25 packets to. Each route has options which will
be explained later. The other options that are configured here are:

o  the tty that the ax25ipd daemon will open, and its speed (usually
   one end of a pipe)

o  what callsign you want to use in "digipeater" mode

o  beacon interval and text

o  whether you want to encapsulate the AX.25 frames in IP datagrams
   or in UDP/IP datagrams.

Nearly all AXIP gateways use IP encapsulation, but some gateways are
behind firewalls that will not allow IP with the AXIP protocol id to
pass and are forced to use UDP/IP. Whatever you choose must match what
the tcp/ip host at the other end of the link is using.

21.3.2.  A typical /etc/ax25/ax25ipd.conf file.

       #
       # ax25ipd configuration file for station floyd.vk5xxx.ampr.org
       #
       # Select axip transport. 'ip' is what you want for compatibility
       # with most other gateways.
       #
       socket ip
       #
       # Set ax25ipd mode of operation. (digi or tnc)
       #
       mode tnc
       #
       # If you selected digi, you must define a callsign. If you selected
       # tnc mode, the callsign is currently optional, but this may change
       # in the future! (2 calls if using dual port kiss)
       #
       #mycall vk5xxx-4
       #mycall2 vk5xxx-5
       #
       # In digi mode, you may use an alias. (2 for dual port)
       #
       #myalias svwdns
       #myalias2 svwdn2
       #
       # Send an ident every 540 seconds ...
       #
       #beacon after 540
       #btext ax25ip -- tncmode rob/vk5xxx -- Experimental AXIP gateway
       #
       # Serial port, or pipe connected to a kissattach in my case
       #
       device /dev/ttyq0
       #
       # Set the device speed
       #
       speed 9600
       #
       # loglevel 0 - no output
       # loglevel 1 - config info only
       # loglevel 2 - major events and errors
       # loglevel 3 - major events, errors, and AX25 frame trace
       # loglevel 4 - all events
       # log 0 for the moment, syslog not working yet ...
       #
       loglevel 2
       #
       # If we are in digi mode, we might have a real tnc here, so use param to
       # set the tnc parameters ...
       #
       #param 1 20
       #
       # Broadcast Address definition. Any of the addresses listed will be
       # forwarded to any of the routes flagged as broadcast capable routes.
       #
       broadcast QST-0 NODES-0
       #
       # ax.25 route definition, define as many as you need.
       # format is route (call/wildcard) (ip host at destination)
       # ssid of 0 routes all ssid's
       #
       # route (call) (ip host) [flags]
       #
       # Valid flags are:
       #  b  - allow broadcasts to be transmitted via this route
       #  d  - this route is the default route
       #
       route vk2sut-0 44.136.8.68 b
       route vk5xxx 44.136.188.221 b
       route vk2abc 44.1.1.1
       #
       #

21.3.3.  Running ax25ipd

Create your /etc/ax25/axports entry:

       # /etc/ax25/axports
       #
       axip   VK2KTJ-13   9600   256   AXIP port
       #

Run the kissattach command to create that port:

       /usr/sbin/kissattach /dev/ptyq0 axip

Run the ax25ipd program:

       /usr/sbin/ax25ipd &

Test the AXIP link:

       call axip vk5xxx

21.3.4.  Some notes about the routes and route flags

The "route" command is where you specify where you want your AX.25
packets encapsulated and sent to. When the ax25ipd daemon receives a
packet from its interface, it compares the destination callsign with
each of the callsigns in its routing table. If it finds a match then
the AX.25 packet is encapsulated in an IP datagram and then
transmitted to the host at the specified IP address.

There are two flags you can add to any of the route commands in the
ax25ipd.conf file. The two flags are:

       b   traffic with a destination address matching any of those on
           the list defined by the "broadcast" keyword should be
           transmitted via this route.

       d   any packets not matching any route should be transmitted via
           this route.

The broadcast flag is very useful, as it enables information that is
normally destined for all stations to be sent to a number of AXIP
destinations. Normally AXIP routes are point-to-point and unable to
handle 'broadcast' packets.

21.4.  Linking NOS and Linux using a pipe device

Many people like to run some version of NOS under Linux because it has
all of the features and facilities they are used to.
Most of those people would also like to have the NOS running on their
machine capable of talking to the Linux kernel so that they can offer
some of the Linux capabilities to radio users via NOS.

Brandon S. Allbery, KF8NH, contributed the following information to
explain how to interconnect the NOS running on a Linux machine with
the kernel code using the Linux pipe device.

Since both Linux and NOS support the slip protocol it is possible to
link the two together by creating a slip link. You could do this by
using two serial ports with a loopback cable between them, but this
would be slow and costly. Linux provides a feature that many other
Unix-like operating systems provide, called `pipes'. These are
special pseudo devices that look like a standard tty device to
software but in fact loop back to another pipe device. To use these
pipes the first program must open the master end of the pipe, and
then the second program can open the slave end. When both ends are
open the programs can communicate with each other simply by writing
characters to the pipes in the way they would if they were terminal
devices.

To use this feature to connect the Linux kernel and a copy of NOS, or
some other program, you first must choose a pipe device to use. You
can find one by looking in your /dev directory. The master ends of
the pipes are named ptyq[1-f] and the slave ends of the pipes are
known as ttyq[1-f]. Remember they come in pairs, so if you select
/dev/ptyqf as your master end then you must use /dev/ttyqf as the
slave end.

Once you have chosen a pipe device pair to use, you should allocate
the master end to your Linux kernel and the slave end to the NOS
program, as the Linux kernel starts first and the master end of the
pipe must be opened first. You must also remember that your Linux
kernel must have a different IP address to your NOS, so you will need
to allocate a unique address for it if you haven't already.
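Since the master and slave device names of a pair differ only in
their prefix (pty versus tty), a tiny helper can derive the slave
name from whichever master you pick. This is just a convenience
sketch, not part of the AX25 utilities:

```shell
# Derive the slave pty device path from a master pty device path:
# /dev/ptyqf pairs with /dev/ttyqf, /dev/ptyq3 with /dev/ttyq3, etc.
slave_of() {
    case "$1" in
        /dev/pty*) echo "/dev/tty${1#/dev/pty}" ;;
        *)         echo "not a master pty: $1" >&2; return 1 ;;
    esac
}

slave_of /dev/ptyqf    # prints /dev/ttyqf
```

You would hand the first name to slattach on the Linux side and the
second to the attach command on the NOS side, as shown below.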
You configure the pipe just as if it were a serial device, so to
create the slip link from your Linux kernel you can use commands
similar to the following:

       # /sbin/slattach -s 38400 -p slip /dev/ptyqf &
       # /sbin/ifconfig sl0 broadcast 44.255.255.255 pointopoint 44.70.248.67 \
               mtu 1536 44.70.4.88
       # /sbin/route add 44.70.248.67 sl0
       # /sbin/route add -net 44.0.0.0 netmask 255.0.0.0 gw 44.70.248.67

In this example the Linux kernel has been given IP address 44.70.4.88
and the NOS program is using IP address 44.70.248.67. The route
command in the last line simply tells your Linux kernel to route all
datagrams for the amprnet via the slip link created by the slattach
command. Normally you would put these commands into your
/etc/rc.d/rc.inet2 file after all your other network configuration is
complete, so that the slip link is created automatically when you
reboot.

Note: there is no advantage in using cslip instead of slip; it
actually reduces performance, because the link is only a virtual one
and operates fast enough that having to compress the headers first
takes longer than transmitting the uncompressed datagram.

To configure the NOS end of the link you could try the following:

       # you can call the interface anything you want; I use "linux"
       # for convenience.
       attach asy ttyqf - slip linux 1024 1024 38400
       route addprivate 44.70.4.88 linux

These commands will create a slip port named `linux' via the slave end
of the pipe device pair to your Linux kernel, and a route to it to
make it work. When you have started NOS you should be able to ping and
telnet to your NOS from your Linux machine and vice versa. If not,
double check that you have made no mistakes, especially that you have
the addresses configured properly and the pipe devices around the
right way.

22.  Where do I find more information about .... ?
Since this document assumes you already have some experience with
packet radio, and this might not be the case, I've collected a set of
references to other information that you might find useful.

22.1.  Packet Radio

You can get general information about Packet Radio from these sites:
the American Radio Relay League, the Radio Amateur Teleprinter
Society, and the Tucson Amateur Packet Radio Group.

22.2.  Protocol Documentation

AX.25, NetRom - Jonathon Naylor has collated a variety of documents
that relate to the packet radio protocols themselves. This
documentation has been packaged up into ax25-doc-1.0.tar.gz.

22.3.  Hardware Documentation

Information on the PI2 Card is provided by the Ottawa Packet Radio
Group. Information on Baycom hardware is available at the Baycom Web
Page.

23.  Discussion relating to Amateur Radio and Linux.

Discussion relating to Amateur Radio and Linux takes place in various
places: in the comp.os.linux.* newsgroups, and on the linux-hams list
on vger.rutgers.edu. Other places where it is held include the
tcp-group mailing list at ucsd.edu (the home of amateur radio TCP/IP
discussions), and you might also try the #linpeople channel on the
linuxnet irc network.

To join the linux-hams channel on the mail list server, send mail to
Majordomo@vger.rutgers.edu with the line:

       subscribe linux-hams

in the message body. The subject line is ignored. The linux-hams
mailing list is archived at zone.pspt.fi and zone.oh7rba.ampr.org.
Please use the archives when you are first starting, because many
common questions are answered there.

To join the tcp-group, send mail to listserver@ucsd.edu with the
line:

       subscribe tcp-group

in the body of the text.

Note: Please remember that the tcp-group is primarily for discussion
of the use of advanced protocols, of which TCP/IP is one, in Amateur
Radio. Linux specific questions should not ordinarily go there.

24.  Acknowledgements.
The following people have contributed to this document in one way or
another, knowingly or unknowingly. In no particular order (as I find
them): Jonathon Naylor, Thomas Sailer, Joerg Reuter, Ron Atkinson,
Alan Cox, Craig Small, John Tanner, Brandon Allbery, Hans Alblas,
Klaus Kudielka, Carl Makin.

25.  Copyright.

The AX25-HOWTO, information on how to install and configure some of
the more important packages providing AX25 support for Linux.
Copyright (c) 1996 Terry Dawson.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the: Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Linux Access HOWTO
Michael De La Rue, v2.11, 28 March 1997

The Linux Access HOWTO covers the use of adaptive technology with
Linux; in particular, using adaptive technology to make Linux
accessible to those who could not use it otherwise. It also covers
areas where Linux can be used within more general adaptive technology
solutions.
______________________________________________________________________

Table of Contents 1. Introduction 1.1 Distribution Policy 2. Comparing Linux with other Operating Systems 2.1 General Comparison 2.2 Availability of Adaptive Technology 2.3 Inherent Usability 3. Visually Impaired 3.1 Seeing the Screen with Low Vision 3.1.1 SVGATextMode 3.1.2 X Window System 3.1.2.1 Different Screen Resolutions 3.1.2.2 Screen Magnification 3.1.2.3 Change Screen Font 3.1.2.4 Cross Hair Cursors etc..
3.1.3 Audio 3.1.4 Producing Large Print 3.1.4.1 LaTeX / TeX 3.1.5 Outputting Large Text 3.2 Aids for Those Who Can't Use Visual Output 3.2.1 Braille Terminals 3.2.2 Speech Synthesis 3.2.3 Handling Console Output 3.2.4 Optical Character Recognition 3.3 Beginning to Learn Linux 3.4 Braille Embossing 4. Hearing Problems 4.1 Visual Bells 5. Physical Problems 5.1 Unable to Use a Mouse/Pointer 5.1.1 Unable to Use a Keyboard 5.1.1.1 Other Input Hardware (X Windows System only) 5.1.2 Controlling Physical Hardware From Linux 5.2 Speech Recognition 5.3 Making the Keyboard Behave 5.3.1 X Window System. 5.3.2 Getting Rid of Auto Repeat 5.3.3 Macros / Much input, few key presses 5.3.4 Sticky Keys 6. General Programming Issues 6.1 Try to Make it Easy to Provide Multiple Interfaces 6.2 Make software configurable. 6.3 Test the Software on Users. 6.4 Make Output Distinct 6.5 Licenses 7. Other Information 7.1 Linux Documentation 7.1.1 The Linux Info Sheet 7.1.2 The Linux Meta FAQ 7.1.3 The Linux Software Map 7.1.4 The Linux HOWTO documents 7.1.5 The Linux FAQ 7.2 Mailing Lists 7.2.1 The Linux Access List 7.2.2 The Linux Blind List 7.3 WWW References 7.4 Suppliers 7.5 Manufacturers 7.5.1 Alphavision 7.5.1.1 Linux Supported Alphavision AT Products 7.5.2 Blazie Engineering 7.5.2.1 Blazie AT Products 7.5.3 Digital Equipment Corporation 7.5.3.1 Linux Supported DEC AT Products 7.5.4 Kommunikations-Technik Stolper GmbH 7.5.4.1 Linux Supported KTG AT Products 8. Software Packages 8.1 Emacspeak 8.2 BRLTTY 8.3 Screen 8.4 Rsynth 8.5 xocr 8.6 xzoom 8.7 NFBtrans 8.7.1 Compiling NFBtrans on Linux 8.8 UnWindows 8.8.1 dynamag 8.8.2 coloreyes 8.8.3 border 8.8.4 un-twm 9. Hardware 9.1 Braille terminals driven from Screen Memory 9.1.1 Braillex 9.1.2 Brailloterm 9.1.3 Patching the Kernel for Braillex and Brailloterm 9.2 Software Driven Braille Terminals 9.2.1 Tieman B.V. 9.2.1.1 CombiBraille 9.2.2 Alva B.V. 9.2.3 Telesensory Systems Inc. 
displays 9.2.3.1 Powerbraille 9.2.3.2 Navigator 9.2.4 Braille Lite 9.3 Speech Synthesisers 9.3.1 DECTalk Express 9.3.2 Accent SA 9.3.3 SPO256-AL2 Speak and Spell chip. 10. Acknowledgements
______________________________________________________________________

1.  Introduction

The aim of this document is to serve as an introduction to the
technologies which are available to make Linux usable by people who,
through some disability, would otherwise have problems with it. In
other words, the target groups of the technologies are the blind, the
partially sighted, the deaf, and the physically disabled. As any other
technologies or pieces of information are discovered they will be
added.

The information here is not just for these people (although that is
probably the main aim) but is also intended to make developers of
Linux aware of the difficulties involved. Possibly the biggest problem
is that, right now, very few of the developers of Linux are aware of
the issues and of the various simple ways to make life simpler for
implementors of these systems. This has, however, changed noticeably
since the introduction of this document, at least to a small extent
because of this document, but to a large extent due to the work of
some dedicated developers, many of whom are mentioned in the
document's Acknowledgements.

Please send any comments, extra information, or offers of assistance
to the maintainer's email address. This address might become a mailing
list in future, or be automatically handed over to a future maintainer
of the HOWTO, so please don't use it for personal email. I don't have
time to follow developments in all areas. I probably won't even read a
mail until I have time to update this document. It's still gratefully
received. If a mail is sent to the blind-list or the access-list, I
will eventually read it and put any useful information into the
document. Otherwise, please send a copy of anything interesting to the
above email address.
Normal mail can be sent to:

       Linux Access HOWTO
       23 Kingsborough Gardens
       Glasgow G12 9NH
       Scotland, U.K.

and will gradually make its way round the world to me. Email will be
faster by weeks. I can be personally contacted by email; since I use
mail filtering on all mail I receive, please use the other address
except for personal email. This is most likely to lead to an
appropriate response.

1.1.  Distribution Policy

The ACCESS-HOWTO is copyrighted (c) 1996 Michael De La Rue. The
ACCESS-HOWTO may be distributed, at your choice, under either the
terms of the GNU Public License version 2 or later, or the standard
Linux Documentation Project terms. These licenses should be available
from where you got this document. Please note that since the LDP terms
don't allow modification (other than translation), modified versions
can be assumed to be distributed under the GPL.

2.  Comparing Linux with other Operating Systems

2.1.  General Comparison

The best place to find out about this is in such documents as the
`Linux Info Sheet', `Linux Meta FAQ' and `Linux FAQ' (see ``Linux
Documentation''). Major reasons for a visually impaired person to use
Linux include its inbuilt networking, which gives full access to the
Internet. More generally, users are attracted by the full development
environment included. Also, unlike most other modern GUI environments,
the graphical front end to Linux (X Windows) is clearly separated from
the underlying environment, and there is a complete set of modern
programs such as World Wide Web browsers and fax software which work
directly in the non graphical environment. This opens up the way to
provide alternative access paths to the system's functionality;
Emacspeak is a good example.

For other users, the comparison is probably less favourable and less
clear. People with very specific and complex needs will find that the
full development system included allows properly customised solutions.
However, much of the software which exists on other systems is only
just beginning to become available. More development is being done,
however, in almost all directions.

2.2.  Availability of Adaptive Technology

There is almost nothing commercial available specifically for Linux.
There is a noticeable amount of free software which would be helpful
in adaptation, for example, a free speech synthesiser and some free
voice control software. There are also a number of free packages which
provide good support for certain Braille terminals, for example.

2.3.  Inherent Usability

Linux has the vast advantage over Windows that most of its software is
command line oriented. This is now changing and almost everything is
now available with a graphical front end. However, because it is in
origin a programmer's operating system, line oriented programs are
still being written covering almost all new areas of interest. For the
physically disabled, this means that it is easy to build custom
programs to suit their needs. For the visually impaired, this should
make use with a speech synthesiser or Braille terminal easy and useful
for the foreseeable future. Linux's multiple virtual console system
makes it practical for a visually impaired person to use it as a
multi-tasking operating system, working directly through Braille.

The windowing system used by Linux (X11) comes with many programming
tools, and should be adaptable. However, in practice, the adaptive
programs available up till now have been more primitive than those on
the Macintosh or Windows. They are, however, completely free (as
opposed to hundreds of pounds) and the quality is definitely
improving.

In principle it should be possible to put together a complete, usable
Linux system for a visually impaired person for about $500 (cheap &
nasty PC + sound card). This compares with many thousands of dollars
for other operating systems (screen reader software / speech
synthesiser hardware). I have yet to see this.
I doubt it would work in practice because the software speech synthesisers available for Linux aren't yet sufficiently good. For a physically disabled person, the limitation will still be the expense of input hardware. 3. Visually Impaired I'll use two general categories here. People who are partially sighted and need help seeing / deciphering / following the text and those who are unable to use any visual interface whatsoever. 3.1. Seeing the Screen with Low Vision There are many different problems here. Often magnification can be helpful, but that's not the full story. Sometimes people can't track motion, sometimes people can't find the cursor unless it moves. This calls for a range of techniques, the majority of which are only just being added to X. 3.1.1. SVGATextMode This program is useful for improving the visibility of the normal text screen that Linux provides. The normal screen that Linux provides shows 80 characters across by 25 vertically. This can be changed (and the quality of those characters improved) using SVGATextMode. The program allows full access to the possible modes of an SVGA graphics card. For example, the text can be made larger so that only 50 by 15 characters appear on the screen. There isn't any easy way to zoom in on sections of a screen, but you can resize when needed. 3.1.2. X Window System For people who can see the screen there are a large number of ways of improving X. They don't add up to a coherent set of features yet, but if set up correctly could solve many problems. 3.1.2.1. Different Screen Resolutions The X server can be set up with many different resolutions. A single key press can then change between them allowing difficult to read text to be seen. In the file /etc/XF86Config, you have an entry in the Screen section with a line beginning with modes. 
If, for example, you set this to

       Modes "1280x1024" "1024x768" "800x600" "640x480" "320x240"

with each mode set up correctly (which requires a reasonably good
monitor for the highest resolution mode), you will be able to have up
to four times screen magnification, switching between the different
levels using Ctrl+Alt+Keypad-Plus and Ctrl+Alt+Keypad-Minus. Moving
the mouse around the screen will scroll you to different parts of the
screen. For more details on how to set this up you should see the
documentation which comes with the XFree86 X server.

3.1.2.2.  Screen Magnification

There are several known screen magnification programs. xmag will
magnify a portion of the screen as much as needed, but is very
primitive. Another one is xzoom. Previously I said that there had to
be something better than xmag; well, this is it. See section
``xzoom''.

Another program which is available is puff. This is specifically
oriented towards visually impaired users. It provides such features as
a box around the pointer which makes it easier to locate. Other
interesting features of puff are that, if correctly set up, it is able
to select and magnify portions of the screen as they are updated.
However, there seem to be interactions between puff and the window
manager which could make it difficult to use. When used with my fvwm
setup, it didn't respond at all to key presses; using twm improved the
situation.

The final program which I have seen working is dynamag. This again has
some specific advantages, such as the ability to select a specific
area of the screen and monitor it, refreshing the magnified display at
regular intervals of between a few tenths of a second and twenty
seconds. dynamag is part of the UnWindows distribution. See
``UnWindows'' for more details.

3.1.2.3.  Change Screen Font

The screen fonts of all properly written X software should be
changeable. You can simply make a font big enough for you to read.
This is generally accomplished by putting a line in the file .Xdefaults which should be in your home directory. By putting the correct lines in this you can change the fonts of your programs, for example

Emacs.font: -sony-fixed-medium-r-normal--16-150-75-75-c-80-iso8859-*

To see what fonts are available, use the program xfontsel under X. There should be some way of changing things at a more fundamental level so that everything comes out with a magnified font. This could be done by renaming fonts, and by telling font generating programs to use a different level of scaling. If someone gets this to work properly, please send me the details of how you did it.

3.1.2.4. Cross Hair Cursors etc.

For people that have problems following cursors there are many things which can help:

o cross-hair cursors (horizontal and vertical lines from the edge of the screen)

o flashing cursors (flashes when you press a key)

No software I know of specifically provides a cross hair cursor. puff, mentioned in the previous section, does however provide a flashing box around the cursor which can make it considerably easier to locate. For now the best that can be done is to change the cursor bitmap. Make a bitmap file as you want it, and another one which is the same size, but completely black. Convert them to the XBM format and run

xsetroot -cursor cursorfile.xbm black-file.xbm

Actually, if you understand masks, then the black-file doesn't have to be completely black, but start with it like that. The .Xdefaults file controls cursors used by actual applications. For much more information, please see the X Big Cursor mini-HOWTO, by Joerg Schneider .

3.1.3. Audio

Provided that the user can hear, audio output can be very useful for making a more friendly and communicative computing environment. For a person with low vision, audio clues can be used to help locate the pointer (see ``UnWindows'').
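A minimal .Xdefaults sketch combining a couple of such resources (the font names are only examples; check what is actually available on your system with xfontsel or xlsfonts):

! Large fonts for common programs (example X resources)
Emacs.font:  -sony-fixed-medium-r-normal--16-150-75-75-c-80-iso8859-*
XTerm*font:  -misc-fixed-medium-r-normal--20-200-75-75-c-100-iso8859-1

Lines starting with ! are comments. The new values take effect for programs started after the resources are loaded (e.g. at the start of your next X session).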
For a console mode user using Emacspeak (see ``Emacspeak''), the audio icons available will provide very many useful facilities. Setting up Linux audio is covered in the Linux Sound HOWTO (see ``Linux Documentation''). Once sound is set up, sounds can be played with the play command which is included with most versions of Linux. This is the way to use my version of UnWindows.

3.1.4. Producing Large Print

Using large print with Linux is quite easy. There are several techniques.

3.1.4.1. LaTeX / TeX

LaTeX is an extremely powerful document preparation system. It may be used to produce large print documents of almost any nature. Though somewhat complicated to learn, many documents are produced using LaTeX or the underlying typesetting program, TeX. This will produce some reasonably large text:

\font\magnifiedtenrm=cmr10 at 20pt % set up a big font
\magnifiedtenrm
this is some large text
\bye

For more details, see the LaTeX book which is available in any computer book shop. There are also a large number of introductions available on the Internet.

3.1.5. Outputting Large Text

Almost all Linux printing uses PostScript, and Linux can drive almost any printer using it. I output large text teaching materials using a standard Epson dot matrix printer. For users of X, there are various tools available which can produce large text. These include LyX, and many commercial word processors.

3.2. Aids for Those Who Can't Use Visual Output

For someone who is completely unable to use a normal screen there are two alternatives: Braille and speech. Obviously for people who also have hearing loss, speech isn't always useful, so Braille will always be important. If you can choose, which should you choose? This is a matter of `vigorous' debate. Speech is rapid to use, reasonably cheap and especially good for textual applications (e.g. reading a long document like this one).
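The example above is plain TeX. The same effect can be had in LaTeX; a minimal sketch using the 12pt class option together with a size command such as \LARGE:

\documentclass[12pt]{article}
\begin{document}
{\LARGE
This paragraph is set in large type, suitable for
a large print handout.
}
\end{document}

Size commands like \LARGE and \huge scale relative to the base size, so starting from the 12pt option gives the largest standard sizes.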
Problems include needing a quiet environment, and possibly needing headphones, both to work without disturbing others and to avoid being listened in on by them (not possible with all speech synthesisers). Braille is better for applications where precise layout is important (e.g. spreadsheets). It can also be somewhat more convenient if you want to check the beginning of a sentence when you get to the end. Braille is, however, much more expensive and slower for reading text.

Obviously, the more you use Braille, the faster you get. Grade II Braille is difficult to learn, but is almost certainly worth it since it is much faster. This means that unless you use Braille for a fair while you can never discover its full potential and decide for yourself. Anyway, enough said on this somewhat controversial topic. (Based on an original by James Bowden.)

3.2.1. Braille Terminals

Braille terminals are normally a line or two of Braille. Since these are at most 80 characters wide and normally 40 wide, they are somewhat limited. I know of two kinds:

o Hardware driven Braille terminals.

o Software driven Braille terminals.

The first kind works only when the computer is in text mode and reads the screen memory directly. See section ``hardware driven Braille terminals''. The second kind of Braille terminal is similar, in many ways, to a normal terminal screen of the kind Linux supports automatically. Unfortunately, they need special software to make them usable. There are two packages which help with these. The first, BRLTTY, works with several Braille display types and the authors are keen to support more as information becomes available. Currently BRLTTY supports Tieman B.V.'s CombiBraille series, Alva B.V.'s ABT3 series and Telesensory Systems Inc.'s PowerBraille and Navigator series displays. The use of Blazie Engineering's Braille Lite as a Braille display is discouraged, but support may be renewed on demand. See section ``Software Braille Terminals''.
The other package I am aware of is Braille Enhanced Screen. This is designed to work on other UNIX systems as well as Linux. It should allow a user access to a Braille terminal with many useful features, such as the ability to run different programs in different `virtual terminals' at the same time.

3.2.2. Speech Synthesis

Speech synthesisers take (normally) ASCII text and convert it into actual spoken output. They can be implemented as either hardware or software. Unfortunately, the free Linux speech synthesisers are, reportedly, not good enough to use as a sole means of output. Hardware speech synthesisers are the alternative. The main one that I know of that works is the DECtalk from Digital, driven by emacspeak. However, at this time (March 1997) a driver for the Doubletalk synthesiser has been announced.

Using emacspeak, full access to all of the facilities of Linux is fairly easy. This includes the normal use of the shell, a World Wide Web browser and many other similar features, such as email. Although it only acts as a plain text reader (similar to IBM's one for the PC) when controlling programs it doesn't understand, with those that it does understand it can provide much more sophisticated control. See section ``Emacspeak'' for more information about emacspeak.

3.2.3. Handling Console Output

When it starts up, Linux at present puts all of its messages straight to the normal (visual) screen. This could be changed if anyone with a basic level of kernel programming ability wants to do it. This means that it is impossible for most Braille devices to get information about what Linux is doing before the operating system is completely working. It is only at that stage that you can start the program that you need for access. If the BRLTTY program is used and run very early in the boot process, then from that stage on the messages on the screen can be read. Most hardware and software will still have to wait until the system is completely ready.
This makes administering a Linux system difficult, but not impossible, for a visually impaired person. Once the system is ready, however, you can scroll back by pressing (on the default keyboard layout) Shift-PageUp.

There is one Braille system that can use the console directly, called the Braillex. This is designed to read directly from the screen memory. Unfortunately the normal scrolling of the terminal gets in the way of this. If you are using a kernel newer than 1.3.75, just type

linux no-scroll

at the LILO prompt, or configure LILO to do this automatically. If you have an earlier version of Linux, see section ``Screen Memory Braille Terminals''. The other known useful thing to do is to use sounds to say when each stage of the boot process has been reached. (T.V. Raman suggestion)

3.2.4. Optical Character Recognition

There is a free Optical Character Recognition (OCR) program for Linux called xocr. In principle, if it is good enough, this program would allow visually impaired people to read normal books to some extent (the accuracy of OCR is never quite high enough). However, according to the documentation, this program needs training to recognise the particular font that it is going to use, and I have no idea how good it is since I don't have the hardware to test it.

3.3. Beginning to Learn Linux

Beginning to learn Linux can seem difficult and daunting for someone who is either coming from no computing background or from a pure DOS background. Doing the following things may help:

o Learn to use Linux (or UNIX) on someone else's system before setting up your own.

o Initially control Linux from your own known speaking/Braille terminal. If you plan to use speech, you may want to learn emacs now. You can learn it as you go along though. See below.

o If you come from an MS-DOS background, read the DOS2Linux Mini HOWTO for help with converting (see ``The Linux HOWTO Documents'').
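To configure LILO to pass no-scroll automatically, a sketch of the relevant /etc/lilo.conf entry (the kernel path and label are illustrative; run lilo afterwards to install the change):

image = /vmlinuz            # illustrative kernel path
    label  = linux
    append = "no-scroll"    # passed to the kernel at every boot

With this in place the option no longer needs to be typed at the LILO prompt.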
The Emacspeak HOWTO written by Jim Van Zandt () covers this in much more detail (see ``The Linux HOWTO Documents''). If you are planning to use Emacspeak, you should know that Emacspeak does not attempt to teach Emacs, so in this sense prior knowledge of Emacs would always be useful. This said, you certainly do not need to know much about Emacs to start using Emacspeak. In fact, once Emacspeak is installed and running, it provides a fluent interface to the rich set of online documentation, including the info pages, and makes learning what you need a lot easier.

"In summary: starting to use Emacspeak takes little learning. Getting the full mileage out of Emacs and Emacspeak, especially if you intend using it as a replacement for X Windows as I do, does involve eventually becoming familiar with a lot of the Emacs extensions; but this is an incremental process and does not need to be done in a day." - T.V. Raman

One other option which may be interesting is the RNIB training tapes, which include one covering UNIX. These can be got from

RNIB Customer Services
PO Box 173
Peterborough
Cambridgeshire PE2 6WS
Tel: 01345 023153 (probably only works in the UK)

3.4. Braille Embossing

Linux should be the perfect platform to drive a Braille embosser from. There are many formatting tools which are aimed specifically at the fixed width device. A Braille embosser can just be connected to the serial port using the standard Linux printing mechanisms. For more info see the Linux Printing HOWTO.

There is a free software package which acts as a multi-lingual grade two translator available for Linux from the American ``National Federation for the Blind''. This is called NFBtrans. See section ``NFB translator'' for more details.

4. Hearing Problems

For the most part there is little problem using a computer for people with hearing problems. Almost all of the output is visual. There are some situations where sound output is used, though.
For these, the problem can sometimes be worked around by using visual output instead.

4.1. Visual Bells

By tradition, computers go `beep' when some program sends them a special code. This is generally used to get attention to the program and for little else. Most of the time, it's possible to replace this by making the entire screen (or terminal emulator) flash. How to do this varies, though.

xterm (under X)
   For xterm, you can either change the setting by pressing the
   middle mouse button while holding down the control key, or by
   putting a line with just `XTerm*visualBell: true' (without the
   quotes, of course) in the file .Xdefaults in your home directory.

the console (otherwise)
   The console is slightly more complex. Please see Alessandro
   Rubini's Visual Bell mini-HOWTO for details on this. It is
   available along with all the other Linux documentation (see
   section ``other Linux documents''). Mostly the configuration has
   to be done on a per-application basis, or by changing the Linux
   kernel itself.

5. Physical Problems

Many of these problems have to be handled individually. The needs of the individual, the ways that they can generate input and other factors vary so much that all this HOWTO can provide is a general set of pointers to useful software and expertise.

5.1. Unable to Use a Mouse/Pointer

Limited mobility can make it difficult to use a mouse. For some people a tracker ball can be a very good solution, but for others the only possible input device is a keyboard (or even something which simulates a keyboard). For normal use of Linux this shouldn't be a problem (but see the section ``Making the keyboard behave''), but for users of X this may cause major problems under some circumstances. Fortunately, the fvwm window manager has been designed for use without a pointer and most things can be done using it. I actually do this myself when I lose my mouse (don't ask) or want to just keep typing. fvwm is included with all distributions of Linux that I know of.
Actually using other programs will depend on their ability to accept key presses. Many X programs do this for all functions. Many don't. Sticky mouse keys, which are supposedly present in the current release of X, should make this easier.

5.1.1. Unable to Use a Keyboard

People who are unable to use a keyboard normally can sometimes use one through a headstick or a mouthstick. This calls for special setup of the keyboard. Please see also the section ``Making the keyboard behave''.

5.1.1.1. Other Input Hardware (X Window System only)

For others, the keyboard cannot be used at all and only pointing devices are available. In this case, no solution is available under the standard Linux console and X will have to be used. If the X-Input extension can be taught to use the device, and the correct software for converting pointer input to characters can be found (I haven't seen it yet), then any pointing device should be usable without a keyboard.

There are a number of devices worth considering for such input, such as touch screens and eye pointers. Many of these will need a `device driver' written for them. This is not terribly difficult if the documentation is available, but requires someone with good C programming skills. Please see the Linux Kernel Hackers' Guide and other kernel reference materials for more information. Once this is set up, it should be possible to use these devices like a normal mouse.

5.1.2. Controlling Physical Hardware From Linux

The main group of interest here is the Linux Lab Project. Generally, much GPIB (a standard interface to scientific equipment, also known as the IEEE bus) hardware can be controlled. This gives much potential for very ambitious accessibility projects, though as far as I know none have yet been attempted.

5.2. Speech Recognition

Speech recognition is a very powerful tool for enabling computer use. There are two recognition systems that I know of for Linux. The first is ears, which is described as ``recognition is not optimal.
But it is fine for playing and will be improved''. The second is AbbotDemo, ``A speaker independent continuous speech recognition system'', which may well be more interesting, though it isn't available for commercial use without prior arrangement. See the Linux Software Map for details (see section ``other Linux documents'').

5.3. Making the Keyboard Behave

5.3.1. X Window System

The latest X server which comes with Linux can include many features which assist in input. These include StickyKeys, MouseKeys, RepeatKeys, BounceKeys, SlowKeys, and TimeOut. They allow customisation of the keyboard to the needs of the user, and are provided as part of the XKB extension in versions of X after version 6.1. To find out your version and see whether you have the extension installed, you can try

xdpyinfo -queryExtensions

5.3.2. Getting Rid of Auto Repeat

To turn off key repeat on the Linux console, run this command (I think it has to be run once per console; a good place to run it would be in your login files, .profile or .login in your home directory).

setterm -repeat off

To get rid of auto repeat on any X server, you can use the command

xset -r

which you could put into the file which gets run when you start using X (often .xsession or .xinit under some setups). Both of these commands are worth looking at for more ways of changing the behaviour of the console.

5.3.3. Macros / Much Input, Few Key Presses

Often in situations such as this, the biggest problem is speed of input. Here the most important thing to aim for is the greatest number of commands with the fewest key presses. For users of the shell (bash / tcsh) you should look at the manual page, in particular command and filename completion (press the tab key and bash tries to guess what should come next). For information on macros which provide sequences of commands for just one key press, have a look at the Keystroke HOWTO.

5.3.4.
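As a sketch, the settings above can be made permanent in your startup files; the filenames vary between setups, and the readline macro at the end is only an illustration of the kind of one-keystroke command macro bash supports:

# In ~/.profile (console logins): turn off key repeat
setterm -repeat off

# In ~/.xsession or ~/.xinitrc (X sessions): turn off auto repeat
xset -r

# In ~/.inputrc (bash/readline): bind a whole command to one key chord.
# Here Ctrl-X followed by l types "ls -l" and presses Enter.
"\C-xl": "ls -l\n"

Readline macros like the last line are a cheap way to get many commands from few key presses without any extra software.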
Sticky Keys

Sticky keys are a feature that allows someone who can only reliably press one button at a time to use a keyboard with all of the various modifier keys such as Shift and Control. These keys, instead of having to be held down at the same time as the other key, behave like the Caps Lock key and stay on while the other key is pressed. They may then either switch off or stay on for the next key, depending on what is needed. For information about how to set this up, please see the Linux Keyboard HOWTO, especially section `I can use only one finger to type with' (section 15 in the version I have). - Information from Toby Reed.

6. General Programming Issues

Many of the issues worth taking into account are the same when writing software which is designed to be helpful for access as when trying to follow good design.

6.1. Try to Make it Easy to Provide Multiple Interfaces

If your software is only usable through a graphical interface then it can be very hard to make it usable for someone who can't see. If it's only usable through a line oriented interface, then someone who can't type will have difficulties. Provide keyboard shortcuts as well as the use of the normal X pointer (generally the mouse). You can almost certainly rely on the user being able to generate key presses to your application.

6.2. Make Software Configurable

If it's easy to change fonts then people will be able to change to one they can read, and the visually impaired will find your software more useful. If the colour scheme can be changed then people who are colour blind will be more likely to be able to use it.

6.3. Test the Software on Users

If you have a number of people use your software, each with different access problems, then they will be more likely to point out specific problems. Obviously, this won't be practical for everybody, but you can always ask for feedback.

6.4.
Make Output Distinct

Where possible, make it clear which different parts of your program are which. Format error messages in a specific way to identify them. Under X, make sure each pane of your window has a name so that any screen reader software can identify it.

6.5. Licenses

Some software for Linux (though none of the key programs) has licenses like `not for commercial use'. This could be quite bad for a person who starts using the software for their personal work and then begins to be able to do work they otherwise couldn't do with it, something which could free them from financial and other dependence on other people. Even if the author of the software is willing to make exceptions, it makes the user vulnerable both to changes of commercial conditions (some company buys up the rights) and to refusal from people they could work for (many companies are overly paranoid about licenses). It is much better to avoid this kind of licensing where possible. Protection from commercial abuse of software can be obtained through more specific licenses like the GNU General Public License or the Artistic License where needed.

7. Other Information

7.1. Linux Documentation

The Linux documentation is critical to the use of Linux, and most of the documents mentioned here should be included in recent versions of Linux from any source I know of. If you want to get the documentation on the Internet, here are some example sites. These should be mirrored at most of the major FTP sites in the world.

o ftp.funet.fi (128.214.6.100) : /pub/OS/Linux/doc/

o tsx-11.mit.edu (18.172.1.2) : /pub/linux/docs/

o sunsite.unc.edu (152.2.22.81) : /pub/Linux/docs/

7.1.1. The Linux Info Sheet

A simple and effective explanation of what Linux is. This is one of the things that you should hand over when you want to explain why you want Linux and what it is good for. The Linux Info Sheet is available on the World Wide Web from and other mirrors.

7.1.2.
The Linux Meta FAQ

A list of other information resources, much more complete than this one. The Meta FAQ is available on the World Wide Web from and other mirrors.

7.1.3. The Linux Software Map

The list of software available for Linux on the Internet. Many of the packages listed here were found through this. The LSM is available in a searchable form from . It is also available as a single text file on all of the FTP sites mentioned in section ``Linux Documentation''.

7.1.4. The Linux HOWTO documents

The HOWTO documents are the main documentation of Linux. This Access HOWTO is an example of one. The home site for the Linux Documentation Project, which produces this information, is . There are also many companies producing these in book form. Contact a local Linux supplier for more details. The Linux HOWTO documents will be in the directory HOWTO on all of the FTP sites mentioned in section ``Linux Documentation''.

7.1.5. The Linux FAQ

A list of `Frequently Asked Questions' with answers, which should solve many common questions. The FAQ list is available from as well as all of the FTP sites mentioned in section ``Linux Documentation''.

7.2. Mailing Lists

There are two lists that I know of covering these issues specifically for Linux. There are also others worth researching which cover computer use more generally. Incidentally, if a mail is sent to these lists I will read it eventually and include any important information in the Access-HOWTO, so you don't need to send me a separate copy unless it's urgent in some way.

7.2.1. The Linux Access List

This is a general list covering Linux access issues. It is designed `to service the needs of users and developers of the Linux OS and software who are either disabled or want to help make Linux more accessible'. To subscribe, send email to and in the BODY (not the subject) of the email message put:

subscribe linux-access

7.2.2. The Linux Blind List

This is a mailing list covering Linux use for blind users.
There is also a list of important and useful software being gathered in the list's archive. To subscribe, send mail to with the subject: help. This list is now moderated.

7.3. WWW References

The World Wide Web is, by its nature, very rapidly changing. If you are reading this document in an old version then some of these references are likely to be out of date. The original version that I maintain on the WWW shouldn't go more than a month or two out of date, so please refer to that.

Linux Documentation is available from Linux Access On the Web, with all of the versions of the HOWTO in . Preferably, however, download from one of the main Linux FTP sites. If I get a vast amount of traffic I'll have to close down these pages and move them elsewhere.

The BLINUX Documentation and Development Project . "The purpose of The BLINUX Documentation and Development Project is to serve as a catalyst which will both spur and speed the development of software and documentation which will enable the blind user to run his or her own Linux workstation."

Emacspeak WWW page

BRLTTY unofficial WWW page

Yahoo (one of the major Internet catalogues)

The Linux Lab Project .

The BLYNX pages: Lynx Support Files Tailored For Blind and Visually Handicapped Users .

7.4. Suppliers

This is a UK supplier for the Braillex.

Alphavision Limited

7.5. Manufacturers

7.5.1. Alphavision

I think that they are a manufacturer? RNIB only lists them as a supplier, but others say they make the Braillex.

Alphavision Ltd
Seymour House
Copyground Lane
High Wycombe
Bucks HP12 3HE
England U.K.
Phone +44 1494-530 555

7.5.1.1. Linux Supported Alphavision AT Products

o Braillex

7.5.2. Blazie Engineering

The Braille Lite was supported in the original version of BRLTTY. That support has now been discontinued. If you have one and want to use it with Linux then that may be possible by using this version of the software.

Blazie Engineering
105 East Jarrettsville Rd.
Forest Hill, MD 21050
U.S.A.
Phone +1 (410) 893-9333
FAX +1 (410) 836-5040
BBS +1 (410) 893-8944
E-Mail
WWW

7.5.2.1. Blazie AT Products

o Braille Lite (support discontinued)

7.5.3. Digital Equipment Corporation

Digital Equipment Corporation
P.O. Box CS2008
Nashua NH 03061-2008
U.S.A.
Order +1 800-722-9332
Tech info +1 800-722-9332
FAX +1 603-884-5597
WWW

7.5.3.1. Linux Supported DEC AT Products

o DECtalk Express

7.5.4. Kommunikations-Technik Stolper GmbH

KTS Stolper GmbH
Herzenhaldenweg 10
73095 Albershausen
Germany
Phone +49 7161 37023
Fax +49 7161 32632

7.5.4.1. Linux Supported KTS AT Products

o Brailloterm

8. Software Packages

References in this section are taken directly from the Linux Software Map, which can be found in all the standard places for Linux documentation and which lists almost all of the software available for Linux.

8.1. Emacspeak

Emacspeak is the software side of a speech interface to Linux. Any other character based program, such as a WWW browser, or telnet, or another editor, can potentially be used within Emacspeak. The main difference between it and normal screen reader software for operating systems such as DOS is that it also has many extra features. It is based on the emacs text editor. A text editor is generally just a program which allows you to change the contents of a file, for example adding new information to a letter. Emacs is in fact far beyond a normal text editor, and so this package is much more useful than you might imagine. You can run any other program from within emacs, getting any output it generates to appear in the emacs terminal emulator. The reason that emacs is a better environment for Emacspeak is that it can understand the layout of the screen and can intelligently interpret the meaning of, for example, a calendar, which would just be a messy array of numbers otherwise. The originator of the package manages to look after his own Linux machine entirely, doing all of the administration from within emacs.
He also uses it to control a wide variety of other machines and software directly from that machine. Emacspeak is included in the Debian Linux distribution and is included as contributed software in the Slackware distribution. This means that it is available on many of the CD-ROM distributions of Linux. By the time this is published, the version included should be 5 or better, but at present I only have version 4 available for examination.

Begin3
Title:          emacspeak - a speech output interface to Emacs
Version:        4.0
Entered-date:   30MAY96
Description:    Emacspeak is the first full-fledged speech output
                system that will allow someone who cannot see to work
                directly on a UNIX system. (Until now, the only option
                available to visually impaired users has been to use a
                talking PC as a terminal.) Emacspeak is built on top of
                Emacs. Once you start emacs with emacspeak loaded, you
                get spoken feedback for everything you do. Your mileage
                will vary depending on how well you can use Emacs.
                There is nothing that you cannot do inside Emacs :-)
Keywords:       handicap access visually impaired blind speech emacs
Author:         raman@adobe.com (T. V. Raman)
Maintained-by:  jrv@vanzandt.mv.com (Jim Van Zandt)
Primary-site:   sunsite.unc.edu apps/sound/speech
                124kB emacspeak-4.0.tgz
Alternate-site:
Original-site:  http://www.cs.cornell.edu
                /Info/People/raman/emacspeak
                123kB emacspeak.tar.gz
Platforms:      DECtalk Express or DEC Multivoice speech synthesizer,
                GNU FSF Emacs 19 (version 19.23 or later) and TCLX 7.3B
                (Extended TCL).
Copying-policy: GPL
End

8.2. BRLTTY

This is a program for running a serial port Braille terminal. It has been widely tested and used, and supports a number of different kinds of hardware (see the Linux Software Map entry below). The maintainer is Nikhil Nair. The other people working on it are Nicolas Pitre and Stephane Doyon. Send any comments to all of them.
The authors seem keen to get support in for more different devices, so if you have one you should consider contacting them. They will almost certainly need programming information for the device, so if you can contact your manufacturer and get that, they are much more likely to be able to help you.

A brief feature list (from their README file) to get you interested:

o Full implementation of the standard screen review facilities.

o A wide range of additional optional features, including blinking cursor and capital letters, screen freezing for leisurely review, attribute display to locate highlighted text, hypertext links, etc.

o `Intelligent' cursor routing. This allows easy movement of the cursor in text editors etc. without moving the hands from the Braille display.

o A cut & paste function. This is particularly useful for copying long filenames, complicated commands etc.

o An on-line help facility.

o Support for multiple Braille codes.

o Modular design allows relatively easy addition of drivers for other Braille displays, or even (hopefully) porting to other Unix-like platforms.

Begin3
Title:          BRLTTY - Access software for Unix for a blind person
                using a soft Braille terminal
Version:        1.0.2, 17SEP96
Entered-date:   17SEP96
Description:    BRLTTY is a daemon which provides access to a Unix
                console for a blind person using a soft Braille display
                (see the README file for a full explanation). BRLTTY
                only works with text-mode applications. We hope that
                this system will be expanded to support other soft
                Braille displays, and possibly even other Unix-like
                platforms.
Keywords:       Braille console access visually impaired blind
Author:         nn201@cus.cam.ac.uk (Nikhil Nair)
                nico@cam.org (Nicolas Pitre)
                doyons@jsp.umontreal.ca (Stephane Doyon)
                jrbowden@bcs.org.uk (James Bowden)
Maintained-by:  nn201@cus.cam.ac.uk (Nikhil Nair)
Primary-site:   sunsite.unc.edu /pub/Linux/system/Access
                110kb brltty-1.0.2.tar.gz (includes the README file)
                6kb brltty-1.0.2.README
                1kb brltty-1.0.2.lsm
Platforms:      Linux (kernel 1.1.92 or later) running on a PC or DEC
                Alpha. Not X/graphics. Supported Braille displays
                (serial communication only):
                - Tieman B.V.: CombiBraille 25/45/85;
                - Alva B.V.: ABT3xx series;
                - Telesensory Systems Inc.: PowerBraille 40 (not
                  65/80), Navigator 20/40/80 (latest firmware version
                  only?).
Copying-policy: GPL
End

8.3. Screen

Screen is a standard piece of software which allows many different programs to run at the same time on one terminal. It has been enhanced to support some Braille terminals (those from Telesensory) directly.

8.4. Rsynth

This is a speech synthesiser listed in the Linux Software Map. It apparently doesn't work well enough for use by a visually impaired person. Use hardware instead, or improve it: a free speech synthesiser would be really, really useful.

8.5. xocr

xocr is a package which implements optical character recognition for Linux. As with Rsynth, I don't think that this will be acceptable as a package for use as a sole means of input by a visually impaired person. I suspect that the algorithm used means that it will need to be watched over by someone who can check that it is reading correctly. I would love to be proved wrong.

8.6. xzoom

xzoom is a screen magnifier in the same vein as xmag, but sufficiently better to be very useful to a visually impaired person.
The main disadvantages of xzoom are that it can't magnify under itself, that some of the key controls aren't compatible with fvwm, the normal Linux window manager, and that its default configuration doesn't run over a network (this can be fixed at some expense to speed). Apart from that, though, it's excellent. It does continuous magnification, which allows you, for example, to scroll a document up and down whilst keeping the section you are reading magnified. Alternatively, you can move a little box around the screen, magnifying its contents and letting you search for the area you want to see. xzoom is also available as an rpm from the normal RedHat sites, making it very easy to install for people using the rpm system (such as RedHat users).

Begin3
Title:          xzoom
Version:        0.1
Entered-date:   Mar 30 1996
Description:    xzoom can magnify (by an integer value), rotate (by a multiple of 90 degrees) and mirror about the X or Y axes areas of an X11 screen and display them in its own window.
Keywords:       X11 zoom magnify xmag
Author:         Itai Nahshon
Maintained-by:  Itai Nahshon
Primary-site:   sunsite.unc.edu probably in /pub/Linux/X11/xutils/xzoom-0.1.tgz
Platforms:      Linux+X11. Support only for 8-bit depth. Tested only in Linux 1.3.* with the XSVGA 3.1.2 driver. Needs the XSHM extension.
Copying-policy: Free
End

8.7. NFBtrans

nfbtrans is a multi-grade Braille translation program distributed by the National Federation of the Blind in the U.S.A. It is released for free in the hope that someone will improve it. Languages covered are USA English, UK English, Spanish, Russian, Esperanto, German, Biblical Hebrew and Biblical Greek, though others could be added just by writing a translation table. Also covered are some computer and math forms. I have managed to get it to compile under Linux, though, not having a Braille embosser available at the present moment, I have not been able to test it. NFBtrans is available from . After downloading it, you will have to compile it.

8.7.1.
Compiling NFBtrans on Linux

I have sent this patch back to the maintainer of NFBtrans, and he says that he has included it, so if you get a version later than 740 you probably won't have to do anything special. Just follow the instructions included in the package.

unzip -L NFBTR740.ZIP   #or whatever filename you have
mv makefile Makefile

Next save the following to a file (e.g. patch-file):

*** nfbpatch.c.orig     Tue Mar 12 11:37:28 1996
--- nfbpatch.c  Tue Mar 12 11:37:06 1996
***************
*** 185,190 ****
--- 185,193 ----
        return (finfo.st_size);
  } /* filelength */

+ #ifndef linux
+ /* pretty safe to assume all linux has usleep I think ?? this should be
+    done properly anyway */
  #ifdef SYSVR4
  void usleep(usec)
  int usec;
***************
*** 195,200 ****
--- 198,204 ----
  UKP
  } /* usleep */
  #endif
+ #endif

  void beep(count)
  int count;

and run

patch < patch-file

then type make and the program should compile.

8.8. UnWindows

UnWindows is a package of access utilities for X which provides many useful facilities for the visually impaired (not blind). It includes a screen magnifier and other customised utilities to help locate the pointer. UnWindows can be downloaded from . As it comes by default, the package will not work on Linux because it relies on special features of Suns. However, some of the utilities do work, and I have managed to port most of the rest, so this package may be interesting to some people. My port will either be incorporated back into the original or will be available in the BLINUX archives (see ``WWW references''). The remaining utility which doesn't yet work is the configuration utility. In my version the programs, instead of generating sounds themselves, just call another program. That other program could, for example, be

play /usr/lib/games/xboing/sounds/ouch.au

which would make the xboing ouch noise; it could do this, for example, as the pointer hit the left edge of the screen.

8.8.1. dynamag

dynamag is a screen magnification program.
Please see the section on screen magnification (``magnification''). This program worked in the default distribution.

8.8.2. coloreyes

coloreyes makes it easy to find the pointer (mouse) location. It consists of a pair of eyes which always look in the direction of the pointer (like xeyes) and change color depending on how far away the mouse is (unlike xeyes). This doesn't work in the default distribution, but the test version, at the same location, seems to work.

8.8.3. border

border is a program which detects when the pointer (mouse) has moved to the edge of the screen and makes a sound according to which edge of the screen has been approached. The version which is available uses a SUN-specific sound system. I have now changed this so that instead it just runs a command, which could be any Linux sound program.

8.8.4. un-twm

The window manager is a special program which controls the location of all of the other windows (programs) displayed on the X screen. un-twm is a special version which will make a sound as the pointer enters different windows. The sound will depend on what window has been entered. The distributed version doesn't work on Linux because, like border, it relies on SUN audio facilities. Again, I already have a special version which will be available by the time you read this.

9. Hardware

9.1. Braille terminals driven from Screen Memory

These are Braille terminals that can read the screen memory directly in a normal text mode. It is possible to use them to work with Linux for almost all of the things that a seeing user can do on the console, including installation. However, they have a problem with the scrolling of the normal Linux kernel, so a kernel patch needs to be applied. See ``Patching the Kernel for Braillex and Brailloterm''.

9.1.1. Braillex

The Braillex is a terminal which is designed to read directly from the screen memory, thus getting round any problems with MS-DOS programs which behave strangely.
If you could see it on screen, then this terminal should be able to display it in Braille. In Linux, unfortunately, screen handling is done differently from MS-DOS, so this has to be changed somewhat. To get this terminal to work, you have to apply the patch given below in section ``Patching the Kernel''. Once this is done, the Braillex becomes one of the most convenient ways to use Linux, as it allows all of the information normally available to a seeing person to be read. Other terminals don't start working until the operating system has completely booted. The Braillex is available with two arrangements of Braille cells (80x1 or 40x2), and there is a model, called the IB 2-D, which also has a vertical bar to show information about all of the lines of the screen (using 4 programmable dots per screen line).

Price: 8,995 UKP, or 11,495 UKP for the 2-D
Manufacturer: Alphavision Limited (UK)
Suppliers: ????

9.1.2. Brailloterm

``What is Brailloterm? It's a refreshable Braille display, made by KTS Kommunikations-Technik Stolper GmbH. It has 80 Braille cells in a single line. Each cell has 8 dots that are combined (up/down) to represent a character. By default, Brailloterm shows me the line in which the screen cursor is. I can use some functions in Brailloterm to see any line in the screen.'' - Jose Vilmar Estacio de Souza

Jose then goes on to say that the terminal can also use the serial port under DOS, but that it needs a special program. I don't know if any of the ones for Linux would work. As with the Braillex, this needs a special patch to the kernel to work properly. See section ``Patching the Kernel''.

Price: about 23.000,- DM / $15,000
Manufacturer: Kommunikations-Technik Stolper GmbH
Suppliers: ????

9.1.3. Patching the Kernel for Braillex and Brailloterm

This probably also applies to any other terminals which read directly from screen memory to work under MS-DOS. Mail me to confirm any terminals that you find work.
This does not apply to, and will actually lose some features for, terminals driven using the BRLTTY software. I am told this patch applies to all kernels of version 1.2.X. It should also work on all kernel versions from 1.1.X to 1.3.72, with just a warning from patch (I've tested that the patch applies to 1.3.68 at least). From 1.3.75 the patch is no longer needed, because the kernel can be configured not to scroll using `linux no-scroll' at the LILO prompt. See the Boot Prompt HOWTO for more details.

*** drivers/char/console.c~     Fri Mar 17 07:31:40 1995
--- drivers/char/console.c      Tue Mar  5 04:34:47 1996
***************
*** 601,605 ****
  static void scrup(int currcons, unsigned int t, unsigned int b)
  {
!       int hardscroll = 1;

        if (b > video_num_lines || t >= b)
--- 601,605 ----
  static void scrup(int currcons, unsigned int t, unsigned int b)
  {
!       int hardscroll = 0;

        if (b > video_num_lines || t >= b)

To apply it:

1. Save the above text to a file (say patch-file)

2. Change to the drivers/char directory of your kernel sources

3. Run patch < patch-file

4. Compile your kernel as normal

Apply the patch and you should be able to use the Braille terminal as normal to read the Linux console. Put in words, the patch just means `change the 1 to a 0 in the first line of the function scrup, which should be near line 603 in the file drivers/char/console.c'. The main point of patch is that the program understands this, and that it knows how to guess what to do when the Linux developers change things in that file. If you want to use a more modern kernel with completely disabled scrolling (instead of the boot prompt solution I already mentioned), please use the following patch. This does not apply to kernels earlier than 1.3.75.

*** console.c~  Fri Mar 15 04:01:45 1996
--- console.c   Thu Apr  4 13:29:48 1996
***************
*** 516,520 ****
  unsigned char has_wrapped;  /* all of videomem is data of fg_console */
  static unsigned char hardscroll_enabled;
!
  static unsigned char hardscroll_disabled_by_init = 0;

  void no_scroll(char *str, int *ints)
--- 516,520 ----
  unsigned char has_wrapped;  /* all of videomem is data of fg_console */
  static unsigned char hardscroll_enabled;
! static unsigned char hardscroll_disabled_by_init = 1;

  void no_scroll(char *str, int *ints)

9.2. Software Driven Braille Terminals

The principle of operation of these terminals is very close to that of a CRT terminal such as the vt100. They connect to the serial port, and the computer has to run a program which sends them output. At present there are two known programs for Linux: BRLTTY (see section ``BRLTTY'') and the Braille-enhanced screen.

9.2.1. Tieman B.V.

9.2.1.1. CombiBraille

This Braille terminal is supported by the BRLTTY software. It comes in three versions with 25, 45 or 85 Braille cells. The extra five cells over a standard display are used for status information.

Price: around 4600 UKP for the 45 cell model ...
Manufacturer: Tieman B.V.
Suppliers: Concept Systems, Nottingham, England (voice +44 115 925 5988)

9.2.2. Alva B.V.

The ABT3xx series is supported in BRLTTY. Only the ABT340 has been confirmed to work at this time. Please pass back information to the BRLTTY authors on other models.

Price: 20 cell - 2200 UKP; 40 cell - 4500 UKP; 80 cell - 8000 UKP
Manufacturer: Alva
Suppliers: Professional Vision Services LTD, Hertfordshire, England (+44 1462 677331)

9.2.3. Telesensory Systems Inc. displays

Because they have provided programming information to the developers, the Telesensory displays are supported both by BRLTTY and screen.

9.2.3.1. Powerbraille

There are three models: the 40, the 65 and the 80. Only the 40 is known to be supported by BRLTTY.

Price: 20 cell - 2200 UKP; 40 cell - 4500 UKP; 80 cell - 8000 UKP
Manufacturer: Telesensory Systems Inc.
Suppliers: Professional Vision Services LTD, Hertfordshire, England (+44 1462 677331)

9.2.3.2. Navigator

Again there are three models: the 20, the 60 and the 80.
Recent versions are all known to work with BRLTTY, but whether earlier ones (with earlier firmware) also work has not been confirmed.

Price: 80 cell - 7800 UKP
Manufacturer: Telesensory Systems Inc.
Suppliers: Professional Vision Services LTD, Hertfordshire, England (+44 1462 677331)

9.2.4. Braille Lite

This is more a portable computer than a terminal. It can, however, be used with BRLTTY version 0.22 (but not newer versions) as if it were a normal Braille terminal. Unfortunately, many of the features available with the CombiBraille cannot be used with the Braille Lite. This means that it should be avoided for Linux use where possible.

Price: $3,395.00
Manufacturer: Blazie Engineering

9.3. Speech Synthesisers

Speech synthesisers normally connect to the serial port of a PC. Useful features include:

o Braille labels on parts

o Many voices to allow different parts of a document to be spoken differently

o Use with headphones (not available on all models)

The critical issue is the quality of the speech. This is much more important to someone who is using the speech synthesiser as their main source of information than to someone who is just getting neat sounds out of a game. For this reason T.V. Raman seems to recommend only the DECTalk. Acceptable alternatives would be good.

9.3.1. DECTalk Express

This is a hardware speech synthesiser. It is recommended for use with Emacspeak, and in fact the DECTalk range are the only speech synthesisers which work with that package at present. This synthesiser has every useful feature that I know about. The only disadvantage that I know of at present is price.

Price: $1195.00
Manufacturer: Digital Equipment Corporation
Suppliers: Many. I'd like details only of suppliers with specific Linux support, or those delivering internationally or otherwise of note; otherwise refer to local organisations, Digital themselves, or the Emacspeak WWW pages.

9.3.2. Accent SA

This is a synthesiser made by Aicom Corporation.
An effort has begun to write a driver for it; however, help is needed. Please see if you think you can help.

9.3.3. SPO256-AL2

The Speak and Spell chip. Some interest has been expressed in using this chip in self-built talking circuits. I'd be interested to know if anyone has found this useful. A software package, speak-0.2pl1.tar.gz, was produced by David Sugar . My suspicion, though, is that the quality of the output wouldn't be good enough for regular use.

10. Acknowledgements

Much of this document was created from various information sources on the Internet, many found through Yahoo and DEC's Alta Vista search engine. Included in this was the documentation of most of the software packages mentioned in the text. Some information was also gleaned from the Royal National Institute for the Blind's helpsheets. T.V. Raman, the author of Emacspeak, has reliably contributed comments, information and text, as well as putting me in touch with other people he knew on the Internet. Kenneth Albanowski provided the patch needed for the Brailloterm and information about it. Roland Dyroff of S.u.S.E. GmbH (Linux distributors and makers of S.u.S.E. Linux (English/German)) looked up KTS Stolper GmbH at my request and got some hardware details and information on the Brailloterm. The most thorough and careful checking of this document was done by James Bowden and Nikhil Nair, the BRLTTY authors, who suggested a large number of corrections as well as extra information for some topics. The contributors to the blinux and linux-access mailing lists have contributed to this document by providing information for me to read. Mark E. Novak of the Trace R&D centre pointed me in the direction of several packages of software and information which I had not seen before. He also made some comments on the structure of the document which I have partially taken into account and should probably do more about. Other contributors include Nicolas Pitre and Stephane Doyon.
A number of other people have contributed comments and information. Specific contributions are acknowledged within the document. This version was specifically produced for RedHat's Dr. Linux book. This is because they provided warning of its impending release to myself and other LDP authors. Their doing this is strongly appreciated, since wrong or old information sits around much longer in a book than on the Internet. No doubt you made a contribution and I haven't mentioned it. Don't worry, it was an accident. I'm sorry. Just tell me and I will add you to the next version.

Linux 2.4 Advanced Routing HOWTO
bert hubert (Netherlabs BV)
Gregory Maxwell
Remco van Mook
Martijn van Oosterhout
Paul B Schroeder
howto@ds9a.nl
v0.1.0 $Date: 2000/05/26 15:42:43 $

A very hands-on approach to iproute2, traffic shaping and a bit of netfilter
______________________________________________________________________

Table of Contents

1. Dedication
2. Introduction
   2.1 Disclaimer & License
   2.2 Prior knowledge
   2.3 What Linux can do for you
   2.4 Housekeeping notes
   2.5 Access, CVS & submitting updates
   2.6 Layout of this document
3. Introduction to iproute2
   3.1 Why iproute2?
   3.2 Iproute2 tour
   3.3 Prerequisites
   3.4 Exploring your current configuration
      3.4.1 ip shows us our links
      3.4.2 ip shows us our IP addresses
      3.4.3 ip shows us our routes
   3.5 ARP
4. Rules - routing policy database
   4.1 Simple source routing
5. GRE and other tunnels
   5.1 A few general remarks about tunnels
   5.2 IP in IP tunneling
   5.3 GRE tunneling
      5.3.1 IPv4 Tunneling
      5.3.2 IPv6 Tunneling
   5.4 Userland tunnels
6. IPsec: secure IP over the internet
7. Multicast routing
8. Using Class Based Queueing for bandwidth management
   8.1 What is queueing?
   8.2 First attempt at bandwidth division
   8.3 What to do with excess bandwidth
   8.4 Class subdivisions
   8.5 Loadsharing over multiple interfaces
9. More queueing disciplines
   9.1 pfifo_fast
   9.2 Stochastic Fairness Queueing
   9.3 Token Bucket Filter
   9.4 Random Early Detect
   9.5 Ingress policer qdisc
10. Netfilter & iproute - marking packets
11.
More classifiers
   11.1 The "fw" classifier
   11.2 The "u32" classifier
      11.2.1 U32 selector
      11.2.2 General selectors
      11.2.3 Specific selectors
   11.3 The "route" classifier
   11.4 The "rsvp" classifier
   11.5 The "tcindex" classifier
12. Kernel network parameters
   12.1 Reverse Path Filtering
   12.2 Obscure settings
      12.2.1 Generic ipv4
      12.2.2 Per device settings
      12.2.3 Neighbor policy
      12.2.4 Routing settings
13. Backbone applications of traffic control
   13.1 Router queues
14. Shaping Cookbook
   14.1 Running multiple sites with different SLAs
   14.2 Protecting your host from SYN floods
   14.3 Ratelimit ICMP to prevent DDoS
   14.4 Prioritising interactive traffic
15. Advanced Linux Routing
   15.1 How does packet queueing really work?
   15.2 Advanced uses of the packet queueing system
   15.3 Other packet shaping systems
16. Dynamic routing - OSPF and BGP
17. Further reading
18. Acknowledgements
______________________________________________________________________

1. Dedication

This document is dedicated to lots of people, and is my attempt to do something back. To list but a few:

· Rusty Russell

· Alexey N. Kuznetsov

· The good folks from Google

· The staff of Casema Internet

2. Introduction

Welcome, gentle reader. This document hopes to enlighten you on how to do more with Linux 2.2/2.4 routing. Unbeknownst to most users, you already run tools which allow you to do spectacular things. Commands like 'route' and 'ifconfig' are actually very thin wrappers around the very powerful iproute2 infrastructure. I hope that this HOWTO will become as readable as the ones by Rusty Russell of (amongst other things) netfilter fame. You can always reach us by writing to the HOWTO team.

2.1. Disclaimer & License

This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. In short, if your STM-64 backbone breaks down and distributes pornography to your most esteemed customers - it's never our fault.
Sorry.

Copyright (c) 2000 by bert hubert, Gregory Maxwell and Martijn van Oosterhout

Please freely copy and distribute (sell or give away) this document in any format. It's requested that corrections and/or comments be forwarded to the document maintainer. You may create a derivative work and distribute it provided that you:

1. Send your derivative work (in the most suitable format, such as sgml) to the LDP (Linux Documentation Project) or the like for posting on the Internet. If not the LDP, then let the LDP know where it is available.

2. License the derivative work with this same license, or use the GPL. Include a copyright notice and at least a pointer to the license used.

3. Give due credit to previous authors and major contributors.

If you're considering making a derived work other than a translation, it's requested that you discuss your plans with the current maintainer. It is also requested that if you publish this HOWTO in hardcopy, you send the authors some samples for 'review purposes' :-)

2.2. Prior knowledge

As the title implies, this is the 'Advanced' HOWTO. While by no means rocket science, some prior knowledge is assumed. This document is meant as a sequel to the Linux 2.4 Networking HOWTO by the same authors. You should probably read that first. Here are some other references which might teach you more:

Rusty Russell's networking-concepts-HOWTO
   A very nice introduction, explaining what a network is, and how it is connected to other networks.

Linux Networking-HOWTO (previously the Net-3 HOWTO)
   Great stuff, although very verbose. It teaches you a lot of stuff that's already configured if you are able to connect to the internet. Should be located in /usr/doc/HOWTO/NET3-4-HOWTO.txt, but can also be found online.

2.3.
What Linux can do for you

A small list of things that are possible:

· Throttle bandwidth for certain computers

· Throttle bandwidth to certain computers

· Help you to fairly share your bandwidth

· Protect your network from DoS attacks

· Protect the internet from your customers

· Multiplex several servers as one, for load balancing or enhanced availability

· Restrict access to your computers

· Limit access of your users to other hosts

· Do routing based on user id (yes!), MAC address, source IP address, port, type of service, time of day or content

Currently, not many people are using these advanced features. This has several reasons. While the provided documentation is verbose, it is not very hands-on. Traffic control is almost undocumented.

2.4. Housekeeping notes

There are several things which should be noted about this document. While I wrote most of it, I really don't want it to stay that way. I am a strong believer in Open Source, so I encourage you to send feedback, updates, patches etcetera. Do not hesitate to inform me of typos or plain old errors. If my English sounds somewhat wooden, please realise that I'm not a native speaker. Feel free to send suggestions. If you feel you are better qualified to maintain a section, or think that you can author and maintain new sections, you are welcome to do so. The SGML of this HOWTO is available via CVS; I very much envision more people working on it. In aid of this, you will find lots of FIXME notices. Patches are always welcome! Wherever you find a FIXME, you should know that you are treading unknown territory. This is not to say that there are no errors elsewhere, but be extra careful. If you have validated something, please let us know so we can remove the FIXME notice. In this HOWTO, I will take some liberties along the road. For example, I postulate a 10Mbit internet connection, while I know full well that those are not very common.

2.5.
Access, CVS & submitting updates

The canonical location for the HOWTO is here. We now have anonymous CVS access available to the world at large. This is good in several ways. You can easily upgrade to newer versions of this HOWTO, and submitting patches is no work at all. Furthermore, it allows the authors to work on the source independently, which is good too.

$ export CVSROOT=:pserver:anon@outpost.ds9a.nl:/var/cvsroot
$ cvs login
CVS password: [enter 'cvs' (without the quotes)]
$ cvs co 2.4routing
cvs server: Updating 2.4routing
U 2.4routing/2.4routing.sgml

If you spot an error, or want to add something, just fix it locally, run cvs diff -u, and send the result off to us. A Makefile is supplied which should help you create postscript, dvi, pdf, html and plain text. You may need to install sgml-tools, ghostscript and tetex to get all formats.

2.6. Layout of this document

We will be doing interesting stuff almost immediately, which also means that there will initially be parts that are explained incompletely or are not perfect. Please gloss over these parts and assume that all will become clear. Routing and filtering are two distinct things. Filtering is documented very well by Rusty's HOWTOs, available here:

· Rusty's Remarkably Unreliable Guides

We will be focusing mostly on what is possible by combining netfilter and iproute2.

3. Introduction to iproute2

3.1. Why iproute2?

Most Linux distributions, and most UNIXes, currently use the venerable 'arp', 'ifconfig' and 'route' commands. While these tools work, they show some unexpected behaviour under Linux 2.2 and up. For example, GRE tunnels are an integral part of routing these days, but require completely different tools. With iproute2, tunnels are an integral part of the tool set. The 2.2 and above Linux kernels include a completely redesigned network subsystem. This new networking code brings Linux performance and a feature set with little competition in the general OS arena.
In fact, the new routing, filtering, and classifying code is more featureful than that provided by many dedicated routers, firewalls and traffic shaping products. As new networking concepts have been invented, people have found ways to plaster them on top of the existing framework in existing OSes. This constant layering of cruft has led to networking code that is filled with strange behaviour, much like most human languages. In the past, Linux emulated SunOS's handling of many of these things, which was not ideal. This new framework has made it possible to clearly express features previously not possible.

3.2. Iproute2 tour

Linux has a sophisticated system for bandwidth provisioning called Traffic Control. This system supports various methods for classifying, prioritising, sharing, and limiting both inbound and outbound traffic. We'll start off with a tiny tour of iproute2 possibilities.

3.3. Prerequisites

You should make sure that you have the userland tools installed. This package is called 'iproute' on both RedHat and Debian, and may otherwise be found at ftp://ftp.inr.ac.ru/ip-routing/iproute2-2.2.4-now-ss??????.tar.gz. Some parts of iproute require you to have certain kernel options enabled. FIXME: We should mention is always the latest

3.4. Exploring your current configuration

This may come as a surprise, but iproute2 is already configured! The current commands ifconfig and route are already using the advanced syscalls, but mostly with very default (ie, boring) settings. The ip tool is central, and we'll ask it to display our interfaces for us.

3.4.1.
ip shows us our links

[ahu@home ahu]$ ip link list
1: lo: mtu 3924 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: dummy: mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: eth0: mtu 1400 qdisc pfifo_fast qlen 100
    link/ether 48:54:e8:2a:47:16 brd ff:ff:ff:ff:ff:ff
4: eth1: mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:e0:4c:39:24:78 brd ff:ff:ff:ff:ff:ff
3764: ppp0: mtu 1492 qdisc pfifo_fast qlen 10
    link/ppp

Your mileage may vary, but this is what it shows on my NAT router at home. I'll only explain part of the output, as not everything is directly relevant. We first see the loopback interface. While your computer may function somewhat without one, I'd advise against it. The mtu size (maximum transfer unit) is 3924 octets, and it is not supposed to queue. Which makes sense, because the loopback interface is a figment of your kernel's imagination. I'll skip the dummy interface for now, and it may not be present on your computer. Then there are my two network interfaces, one at the side of my cable modem, the other serving my home ethernet segment. Furthermore, we see a ppp0 interface. Note the absence of IP addresses. iproute disconnects the concept of 'links' from that of 'IP addresses'. With IP aliasing, the concept of 'the' IP address had become quite irrelevant anyhow. It does show us the MAC addresses though, the hardware identifiers of our ethernet interfaces.

3.4.2.
ip shows us our IP addresses

[ahu@home ahu]$ ip address show
1: lo: mtu 3924 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
2: dummy: mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: eth0: mtu 1400 qdisc pfifo_fast qlen 100
    link/ether 48:54:e8:2a:47:16 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/8 brd 10.255.255.255 scope global eth0
4: eth1: mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:e0:4c:39:24:78 brd ff:ff:ff:ff:ff:ff
3764: ppp0: mtu 1492 qdisc pfifo_fast qlen 10
    link/ppp
    inet 212.64.94.251 peer 212.64.94.1/32 scope global ppp0

This contains more information. It shows all our addresses, and to which cards they belong. 'inet' stands for Internet. There are lots of other address families, but these don't concern us right now. Let's examine eth0 somewhat closer. It says that it is related to the inet address '10.0.0.1/8'. What does this mean? The /8 stands for the number of bits that are in the network address. There are 32 bits in an address, so we have 24 bits left to address hosts on our network. The first 8 bits of 10.0.0.1 correspond to 10.0.0.0, our network address, and our netmask is 255.0.0.0. The other bits are connected to this interface, so 10.250.3.13 is directly available on eth0, as is 10.0.0.1 for example. With ppp0 the same concept applies, though the numbers are different. Its address is 212.64.94.251, without a subnet mask. This means that we have a point-to-point connection and that every address, with the exception of 212.64.94.251, is remote. There is more information, however: it tells us that on the other side of the link is yet again only one address, 212.64.94.1. The /32 tells us that there are no 'network bits'. It is absolutely vital that you grasp these concepts. Refer to the documentation mentioned at the beginning of this HOWTO if you have trouble. You may also note 'qdisc', which stands for Queueing Discipline. This will become vital later on.

3.4.3.
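The arithmetic behind the /8 notation is easy to check for yourself. Here is a small illustrative sketch (not part of the HOWTO's toolset) using Python's standard ipaddress module, recomputing what 'ip address show' printed for eth0:

```python
# Illustrative sketch: the 10.0.0.1/8 address comes from the example
# output above; everything else is derived from it.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.1/8")

print(iface.network)                    # 10.0.0.0/8, the network address
print(iface.netmask)                    # 255.0.0.0, first 8 bits set
print(iface.network.broadcast_address)  # 10.255.255.255, the 'brd' field

# Any address sharing the first 8 bits is on the same link:
print(ipaddress.ip_address("10.250.3.13") in iface.network)  # True
```

The same check on the ppp0 peer address 212.64.94.1/32 yields a network of exactly one address, which is why /32 means 'no network bits'.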
ip shows us our routes

Well, we now know how to find 10.x.y.z addresses, and we are able to reach 212.64.94.1. This is not enough, however, so we need instructions on how to reach the world. The internet is available via our ppp connection, and it appears that 212.64.94.1 is willing to spread our packets around the world, and deliver results back to us.

[ahu@home ahu]$ ip route show
212.64.94.1 dev ppp0  proto kernel  scope link  src 212.64.94.251
10.0.0.0/8 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 212.64.94.1 dev ppp0

This is pretty much self-explanatory. The first 4 lines of output explicitly state what was already implied by ip address show; the last line tells us that the rest of the world can be found via 212.64.94.1, our default gateway. We can see that it is a gateway because of the word via, which tells us that we need to send packets to 212.64.94.1, and that it will take care of things. For reference, this is what the old 'route' utility shows us:

[ahu@home ahu]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
212.64.94.1     0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         212.64.94.1     0.0.0.0         UG    0      0        0 ppp0

3.5. ARP

ARP is the Address Resolution Protocol as described in RFC 826. ARP is used by a networked machine to resolve the hardware location/address of another machine on the same local network. Machines on the Internet are generally known by their names, which resolve to IP addresses. This is how a machine on the foo.com network is able to communicate with another machine which is on the bar.net network. An IP address, though, cannot tell you the physical location of a machine. This is where ARP comes into the picture. Let's take a very simple example. Suppose I have a network composed of several machines.
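The kernel chooses among these routes by longest-prefix match: the most specific route containing the destination wins, and 'default' (0.0.0.0/0) matches everything. The selection logic can be sketched roughly as follows; this is purely illustrative (the kernel's actual implementation is very different), using the example table above:

```python
import ipaddress

# The example routing table from 'ip route show', as (prefix, device) pairs.
routes = [
    ("212.64.94.1/32", "ppp0"),
    ("10.0.0.0/8",     "eth0"),
    ("127.0.0.0/8",    "lo"),
    ("0.0.0.0/0",      "ppp0"),   # default via 212.64.94.1
]

def pick_route(dest):
    """Return the device of the most specific route matching dest."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(prefix), dev)
               for prefix, dev in routes
               if addr in ipaddress.ip_network(prefix)]
    # Longest prefix (largest prefixlen) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_route("10.250.3.13"))   # eth0: the local /8 beats the default
print(pick_route("198.51.100.7"))  # ppp0: only the default route matches
```

Note how a destination on the local network never reaches the default route, even though 0.0.0.0/0 also matches: eight network bits beat zero.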
Two of the machines which are currently on my network are foo with an IP address of 10.0.0.1 and bar with an IP address of 10.0.0.2. Now foo wants to ping bar to see if he is alive, but alas, foo has no idea where bar is. So when foo decides to ping bar he will need to send out an ARP request. This ARP request is akin to foo shouting out on the network "Bar (10.0.0.2)! Where are you?" As a result every machine on the network will hear foo shouting, but only bar (10.0.0.2) will respond. Bar will then send an ARP reply directly back to foo, which is akin to bar saying, "Foo (10.0.0.1), I am here at 00:60:94:E9:08:12." After this simple transaction, used to locate his friend on the network, foo is able to communicate with bar until he (his arp cache) forgets where bar is. Now let's see how this works. You can view your machine's current arp/neighbor cache/table like so:

     [root@espa041 /home/src/iputils]# ip neigh show
     9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
     9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud reachable

As you can see my machine espa041 (9.3.76.41) knows where to find espa042 (9.3.76.42) and espagate (9.3.76.1). Now let's add another machine to the arp cache.

     [root@espa041 /home/paulsch/.gnome-desktop]# ping -c 1 espa043
     PING espa043.austin.ibm.com (9.3.76.43) from 9.3.76.41 : 56(84) bytes of data.
     64 bytes from 9.3.76.43: icmp_seq=0 ttl=255 time=0.9 ms

     --- espa043.austin.ibm.com ping statistics ---
     1 packets transmitted, 1 packets received, 0% packet loss
     round-trip min/avg/max = 0.9/0.9/0.9 ms

     [root@espa041 /home/src/iputils]# ip neigh show
     9.3.76.43 dev eth0 lladdr 00:06:29:21:80:20 nud reachable
     9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
     9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud reachable

As a result of espa041 trying to contact espa043, espa043's hardware address/location has now been added to the arp/neighbor cache.
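Besides being populated automatically like this, the neighbor table can also be edited by hand. A hedged sketch (the IP and hardware addresses below are hypothetical, and the commands require root):

```shell
# Pin a neighbour entry so it never has to be resolved via ARP
# (hypothetical IP and hardware address).
ip neigh add 10.0.0.2 lladdr 00:60:94:e9:08:12 dev eth0 nud permanent

# 'nud permanent' entries never go stale; list them with:
ip neigh show dev eth0
```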
So until the entry for espa043 times out (as a result of no communication between the two), espa041 knows where to find espa043 and has no need to send an ARP request. Now let's delete espa043 from our arp cache:

     [root@espa041 /home/src/iputils]# ip neigh delete 9.3.76.43 dev eth0
     [root@espa041 /home/src/iputils]# ip neigh show
     9.3.76.43 dev eth0  nud failed
     9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
     9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud stale

Now espa041 has again forgotten where to find espa043 and will need to send another ARP request the next time he needs to communicate with espa043. You can also see from the above output that espagate (9.3.76.1) has been changed to the "stale" state. This means that the location shown is still valid, but it will have to be confirmed at the first transaction to that machine.

4. Rules - routing policy database

If you have a large router, you may well cater for the needs of different people, who should be served differently. The routing policy database allows you to do this by having multiple sets of routing tables. If you want to use this feature, make sure that your kernel is compiled with the "IP: policy routing" feature. When the kernel needs to make a routing decision, it finds out which table needs to be consulted. By default, there are three tables. The old 'route' tool modifies the main and local tables, as does the ip tool (by default). The default rules:

     [ahu@home ahu]$ ip rule list
     0:      from all lookup local
     32766:  from all lookup main
     32767:  from all lookup default

This lists the priority of all rules. We see that all rules apply to all packets ('from all'). We've seen the 'main' table before, it's what ip route ls shows, but the 'local' and 'default' tables are new. If we want to do fancy things, we generate rules which point to different tables and thereby override the system-wide routing rules.
For the exact semantics of what the kernel does when multiple rules match, see Alexey's ip-cref documentation.

4.1. Simple source routing

Let's take a real example once again. I have 2 (actually 3, about time I returned them) cable modems, connected to a Linux NAT ('masquerading') router. People living here pay me to use the internet. Suppose one of my house mates only visits hotmail and wants to pay less. This is fine with me, but he'll end up using the low-end cable modem. The 'fast' cable modem is known as 212.64.94.251 and is a PPP link to 212.64.94.1. The 'slow' cable modem is known by various ip addresses, 212.64.78.148 in this example, and is a link to 195.96.98.253. The local table:

     [ahu@home ahu]$ ip route list table local
     broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1
     local 10.0.0.1 dev eth0  proto kernel  scope host  src 10.0.0.1
     broadcast 10.0.0.0 dev eth0  proto kernel  scope link  src 10.0.0.1
     local 212.64.94.251 dev ppp0  proto kernel  scope host  src 212.64.94.251
     broadcast 10.255.255.255 dev eth0  proto kernel  scope link  src 10.0.0.1
     broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
     local 212.64.78.148 dev ppp2  proto kernel  scope host  src 212.64.78.148
     local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
     local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1

Lots of obvious things, but things that need to be specified somewhere. Well, here they are. The default table is empty. Let's view the 'main' table:

     [ahu@home ahu]$ ip route list table main
     195.96.98.253 dev ppp2  proto kernel  scope link  src 212.64.78.148
     212.64.94.1 dev ppp0  proto kernel  scope link  src 212.64.94.251
     10.0.0.0/8 dev eth0  proto kernel  scope link  src 10.0.0.1
     127.0.0.0/8 dev lo  scope link
     default via 212.64.94.1 dev ppp0

We now generate a new rule which we call 'John', for our hypothetical house mate. Although we can work with pure numbers, it's far easier if we add our tables to /etc/iproute2/rt_tables.
     # echo 200 John >> /etc/iproute2/rt_tables
     # ip rule add from 10.0.0.10 table John
     # ip rule ls
     0:      from all lookup local
     32765:  from 10.0.0.10 lookup John
     32766:  from all lookup main
     32767:  from all lookup default

Now all that is left is to generate John's table, and flush the route cache:

     # ip route add default via 195.96.98.253 dev ppp2 table John
     # ip route flush cache

And we are done. It is left as an exercise for the reader to implement this in ip-up.

5. GRE and other tunnels

There are 3 kinds of tunnels in Linux: IP-in-IP tunneling, GRE tunneling, and tunnels that live outside the kernel (like, for example, PPTP).

5.1. A few general remarks about tunnels

Tunnels can be used to do some very unusual and very cool stuff. They can also make things go horribly wrong when you don't configure them right. Don't point your default route to a tunnel device unless you know _exactly_ what you are doing :-). Furthermore, tunneling increases overhead, because it needs an extra set of IP headers. Typically this is 20 bytes per packet, so if the normal packet size (MTU) on a network is 1500 bytes, a packet that is sent through a tunnel can only be 1480 bytes big. This is not necessarily a problem, but be sure to read up on IP packet fragmentation/reassembly when you plan to connect large networks with tunnels. Oh, and of course, the fastest way to dig a tunnel is to dig at both sides.

5.2. IP in IP tunneling

This kind of tunneling has been available in Linux for a long time. It requires 2 kernel modules, ipip.o and new_tunnel.o. Let's say you have 3 networks: internal networks A and B, and intermediate network C (or let's say, the Internet). So we have network A:

     network 10.0.1.0
     netmask 255.255.255.0
     router  10.0.1.1

The router has address 172.16.17.18 on network C. And network B:

     network 10.0.2.0
     netmask 255.255.255.0
     router  10.0.2.1

The router has address 172.19.20.21 on network C.
As far as network C is concerned, we assume that it will pass any packet sent from A to B and vice versa. You might even use the Internet for this. Here's what you do. First, make sure the modules are installed:

     insmod ipip.o
     insmod new_tunnel.o

Then, on the router of network A, you do the following:

     ifconfig tunl0 10.0.1.1 pointopoint 172.19.20.21
     route add -net 10.0.2.0 netmask 255.255.255.0 dev tunl0

And on the router of network B:

     ifconfig tunl0 10.0.2.1 pointopoint 172.16.17.18
     route add -net 10.0.1.0 netmask 255.255.255.0 dev tunl0

And if you're finished with your tunnel:

     ifconfig tunl0 down

Presto, you're done. You can't forward broadcast or IPv6 traffic through an IP-in-IP tunnel, though. You just connect 2 IPv4 networks that normally wouldn't be able to talk to each other, that's all. As far as compatibility goes, this code has been around a long time, so it's compatible all the way back to 1.3 kernels. Linux IP-in-IP tunneling doesn't work with other Operating Systems or routers, as far as I know. It's simple, it works. Use it if you have to, otherwise use GRE.

5.3. GRE tunneling

GRE is a tunneling protocol that was originally developed by Cisco, and it can do a few more things than IP-in-IP tunneling. For example, you can also transport multicast traffic and IPv6 through a GRE tunnel. In Linux, you'll need the ip_gre module.

5.3.1. IPv4 Tunneling

Let's do IPv4 tunneling first. Let's say you have 3 networks: internal networks A and B, and intermediate network C (or let's say, the Internet). So we have network A:

     network 10.0.1.0
     netmask 255.255.255.0
     router  10.0.1.1

The router has address 172.16.17.18 on network C. Let's call this network neta (ok, hardly original). And network B:

     network 10.0.2.0
     netmask 255.255.255.0
     router  10.0.2.1

The router has address 172.19.20.21 on network C. Let's call this network netb (still not original). As far as network C is concerned, we assume that it will pass any packet sent from A to B and vice versa.
How and why, we do not care. On the router of network A, you do the following:

     ip tunnel add netb mode gre remote 172.19.20.21 local 172.16.17.18 ttl 255
     ip addr add 10.0.1.1 dev netb
     ip route add 10.0.2.0/24 dev netb

Let's discuss this for a bit. In line 1, we added a tunnel device, and called it netb (which is kind of obvious because that's where we want it to go). Furthermore we told it to use the GRE protocol (mode gre), that the remote address is 172.19.20.21 (the router at the other end), that our tunneling packets should originate from 172.16.17.18 (which allows your router to have several IP addresses on network C and lets you decide which one to use for tunneling) and that the TTL field of the packet should be set to 255 (ttl 255). In the second line we gave the newly born interface netb the address 10.0.1.1. This is OK for smaller networks, but when you're starting up a mining expedition (LOTS of tunnels), you might want to consider using another IP range for tunneling interfaces (in this example, you could use 10.0.3.0). In the third line we set the route for network B. Note the different notation for the netmask. If you're not familiar with this notation, here's how it works: you write out the netmask in binary form, and you count all the ones. If you don't know how to do that, just remember that 255.0.0.0 is /8, 255.255.0.0 is /16 and 255.255.255.0 is /24. Oh, and 255.255.254.0 is /23, in case you were wondering. But enough about this, let's go on with the router of network B.

     ip tunnel add neta mode gre remote 172.16.17.18 local 172.19.20.21 ttl 255
     ip addr add 10.0.2.1 dev neta
     ip route add 10.0.1.0/24 dev neta

And when you want to remove the tunnel on router A:

     ip link set netb down
     ip tunnel del netb

Of course, you can replace netb with neta for router B.

5.3.2. IPv6 Tunneling

BIG FAT WARNING!! The following is untested and might therefore be complete and utter BOLLOCKS. Proceed at your own risk. Don't say I didn't warn you.
FIXME: check & try all this

A short bit about IPv6 addresses: IPv6 addresses are, compared to IPv4 addresses, monstrously big. An example:

     3ffe:2502:200:40:281:48fe:dcfe:d9bc

So, to make writing them down easier, there are a few rules:

· Don't use leading zeroes. Same as in IPv4.

· Use colons to separate every 16 bits or two bytes.

· When you have lots of consecutive zeroes, you can write this down as ::. You can only do this once in an address and only for quantities of 16 bits, though.

Using these rules, the address 3ffe:0000:0000:0000:0000:0020:34A1:F32C can be written down as 3ffe::20:34A1:F32C, which is a lot shorter. On with the tunnels. Let's assume that you have the following IPv6 network, and you want to connect it to 6bone, or a friend.

     Network 3ffe:406:5:1:5:a:2:1/96

Your IPv4 address is 172.16.17.18, and the 6bone router has IPv4 address 172.22.23.24.

     ip tunnel add sixbone mode sit remote 172.22.23.24 local 172.16.17.18 ttl 255
     ip link set sixbone up
     ip addr add 3ffe:406:5:1:5:a:2:1/96 dev sixbone
     ip route add 3ffe::/15 dev sixbone

Let's discuss this. In the first line, we created a tunnel device called sixbone. We gave it mode sit (which is IPv6-in-IPv4 tunneling) and told it where to go to (remote) and where to come from (local). TTL is set to maximum, 255. Next, we made the device active (up). After that, we added our own network address, and set a route for 3ffe::/15 (which is currently all of 6bone) through the tunnel. GRE tunnels are currently the preferred type of tunneling. It's a standard that's also widely adopted outside the Linux community and therefore a Good Thing.

5.4. Userland tunnels

There are literally dozens of implementations of tunneling outside the kernel. Best known are of course PPP and PPTP, but there are lots more (some proprietary, some secure, some that don't even use IP) and that is really beyond the scope of this HOWTO.

6. IPsec: secure IP over the internet

FIXME: Waiting for our feature editor Stefan to finish his stuff

7.
Multicast routing

FIXME: Editor Vacancy!

8. Using Class Based Queueing for bandwidth management

Now, when I discovered this, it *really* blew me away. Linux 2.2 comes with everything to manage bandwidth in ways comparable to high-end dedicated bandwidth management systems. Linux even goes far beyond what Frame and ATM provide. The two basic units of Traffic Control are filters and queues. Filters place traffic into queues, and queues gather traffic and decide what to send first, send later, or drop. There are several flavours of filters and queues. The most common filters are fwmark and u32; the first lets you use the Linux netfilter code to select traffic, and the second allows you to select traffic based on ANY header. The most notable queue is Class Based Queue. CBQ is a super-queue, in that it contains other queues (even other CBQs). It may not be immediately clear what queueing has to do with bandwidth management, but it really does work. For our frame of reference, I have modelled this section on an ISP where I learned the ropes, so to speak, Casema Internet in The Netherlands. Casema, which is actually a cable company, has internet needs both for their customers and for their own office. Most corporate computers there have access to the internet. In reality, they have lots of money to spend and do not use Linux for bandwidth management. We will explore how our ISP could have used Linux to manage their bandwidth.

8.1. What is queueing?

With queueing we determine the order in which data is *sent*. It is important to realise that we can only shape data that we transmit. How does changing the order determine the speed of transmission? Imagine a cash register which is able to process 3 customers per minute. People wishing to pay go stand in line at the 'tail end' of the queue. This is 'fifo queueing'. Let's suppose however that we let certain people always join in the middle of the queue, instead of at the end.
These people spend a lot less time in the queue and are therefore able to shop faster. With the way the internet works, we have no direct control of what people send us. It's a bit like your (physical!) mailbox at home. There is no way you can influence the world to modify the amount of mail they send you, short of contacting everybody. However, the internet is mostly based on TCP/IP, which has a few features that help us. TCP/IP has no way of knowing the capacity of the network between two hosts, so it just starts sending data faster and faster ('slow start') and when packets start getting lost, because there is no room to send them, it will slow down. This is the equivalent of not reading half of your mail, and hoping that people will stop sending it to you. With the difference that it works for the Internet :-)

FIXME: explain that normally, ACKs are used to determine speed

     [The Internet] ---<E1>--- [Linux router] ---<E0>--- [Office+ISP]
                        eth1                      eth0

Now, our Linux router has two interfaces, which I shall dub eth0 and eth1. Eth1 is connected to our router which moves packets to and from our fibre link. Eth0 is connected to a subnet which contains both the corporate firewall and our network head ends, through which we can connect to our customers. Because we can only limit what we send, we need two separate but possibly very similar sets of rules. By modifying queueing on eth0, we determine how fast data gets sent to our customers, and therefore how much downstream bandwidth is available for them. Their 'download speed', in short. On eth1, we determine how fast we send data to The Internet, and thus how fast our users, both corporate and commercial, can upload data.

8.2. First attempt at bandwidth division

CBQ enables us to generate several classes, and even classes within classes. The larger divisions might be called 'agencies'. Within these classes may be things like 'bulk' or 'interactive'.
For example, we may have a 10 megabit internet connection to 'the internet' which is to be shared by our customers, and our corporate needs. We should not allow a few people at the office to steal away large amounts of bandwidth which we should sell to our customers. On the other hand, our customers should not be able to drown out the traffic from our field offices to the customer database. Previously, one way to solve this was to use Frame relay/ATM and create virtual circuits. This works, but frame is not very fine grained, ATM is terribly inefficient at carrying IP traffic, and neither have standardised ways to segregate different types of traffic into different VCs. However, if you do use ATM, Linux can also happily perform deft acts of fancy traffic classification for you too. Another way is to order separate connections, but this is not very practical and also not very elegant, and still does not solve all your problems. CBQ to the rescue! Clearly we have two main classes, 'ISP' and 'Office'. Initially, we really don't care what the divisions do with their bandwidth, so we don't further subdivide their classes. We decide that the customers should always be guaranteed 8 megabits of downstream traffic, and our office 2 megabits. Setting up traffic control is done with the iproute2 tool tc.

     # tc qdisc add dev eth0 root handle 10: cbq bandwidth 10Mbit avpkt 1000

Ok, lots of numbers here. What has happened? We have configured the 'queueing discipline' of eth0. With 'root' we denote that this is the root discipline. We have given it the handle '10:'. We want to do CBQ, so we mention that on the command line as well. We tell the kernel that it can allocate 10Mbit and that the average packet size is somewhere around 1000 octets.

FIXME: Double check with Alexey that the built-in cell calculation is sufficient.

FIXME: With a 1500 mtu, the default cell is calculated the same as in the old example.
FIXME: I checked the sources (userspace and kernel), so we should be safe omitting it.

Now we need to generate our root class, from which all others descend:

     # tc class add dev eth0 parent 10:0 classid 10:1 cbq bandwidth 10Mbit rate \
       10Mbit allot 1514 weight 1Mbit prio 8 maxburst 20 avpkt 1000

Even more numbers to worry about - the Linux CBQ implementation is very generic. With 'parent 10:0' we indicate that this class descends from the root of qdisc handle '10:' we generated earlier. With 'classid 10:1' we name this class. We really don't tell the kernel a lot more, we just generate a class that completely fills the available device. We also specify that the MTU (plus some overhead) is 1514 octets. We also 'weigh' this class with 1Mbit - a tuning parameter. We now generate our ISP class:

     # tc class add dev eth0 parent 10:1 classid 10:100 cbq bandwidth 10Mbit rate \
       8Mbit allot 1514 weight 800Kbit prio 5 maxburst 20 avpkt 1000 \
       bounded

We allocate 8Mbit, and indicate that this class must not exceed this by adding the 'bounded' parameter. Otherwise this class would have started borrowing bandwidth from other classes, something we will discuss later on. To top it off, we generate the root Office class:

     # tc class add dev eth0 parent 10:1 classid 10:200 cbq bandwidth 10Mbit rate \
       2Mbit allot 1514 weight 200Kbit prio 5 maxburst 20 avpkt 1000 \
       bounded

To make this a bit clearer, a diagram which shows our classes:

     +-------------[10: 10Mbit]----------------------+
     |+-------------[10:1 root 10Mbit]--------------+|
     ||                                             ||
     || +-[10:100 8Mbit]-+ +--[10:200 2Mbit]-----+  ||
     || |                | |                     |  ||
     || |      ISP       | |       Office        |  ||
     || |                | |                     |  ||
     || +----------------+ +---------------------+  ||
     ||                                             ||
     |+---------------------------------------------+|
     +-----------------------------------------------+

Ok, now we have told the kernel what our classes are, but not yet how to manage the queues. We do this presently, in one fell swoop for both classes.
     # tc qdisc add dev eth0 parent 10:100 sfq quantum 1514b perturb 15
     # tc qdisc add dev eth0 parent 10:200 sfq quantum 1514b perturb 15

In this case we install the Stochastic Fairness Queueing discipline (sfq), which is not quite fair, but works well up to high bandwidths without burning up CPU cycles. There are other queueing disciplines available which are better, but they need more CPU. The Token Bucket Filter is often used. Now there is only one thing left to do, and that is to explain to the kernel which packets belong to which class. Initially we will do this natively with iproute2, but more interesting applications are possible in combination with netfilter.

     # tc filter add dev eth0 parent 10:0 protocol ip prio 100 u32 match ip dst \
       150.151.23.24 flowid 10:200
     # tc filter add dev eth0 parent 10:0 protocol ip prio 25 u32 match ip dst \
       150.151.0.0/16 flowid 10:100

Here it is assumed that our office hides behind a firewall with IP address 150.151.23.24 and that all our other IP addresses should be considered to be part of the ISP. The u32 match is a very simple one - more sophisticated matching rules are possible when using netfilter to mark our packets, which we can then match on in tc. Now that we have fairly divided the downstream bandwidth, we need to do the same for the upstream.
For brevity's sake, all in one go:

     # tc qdisc add dev eth1 root handle 20: cbq bandwidth 10Mbit avpkt 1000
     # tc class add dev eth1 parent 20:0 classid 20:1 cbq bandwidth 10Mbit rate \
       10Mbit allot 1514 weight 1Mbit prio 8 maxburst 20 avpkt 1000
     # tc class add dev eth1 parent 20:1 classid 20:100 cbq bandwidth 10Mbit rate \
       8Mbit allot 1514 weight 800Kbit prio 5 maxburst 20 avpkt 1000 \
       bounded
     # tc class add dev eth1 parent 20:1 classid 20:200 cbq bandwidth 10Mbit rate \
       2Mbit allot 1514 weight 200Kbit prio 5 maxburst 20 avpkt 1000 \
       bounded
     # tc qdisc add dev eth1 parent 20:100 sfq quantum 1514b perturb 15
     # tc qdisc add dev eth1 parent 20:200 sfq quantum 1514b perturb 15
     # tc filter add dev eth1 parent 20:0 protocol ip prio 100 u32 match ip src \
       150.151.23.24 flowid 20:200
     # tc filter add dev eth1 parent 20:0 protocol ip prio 25 u32 match ip src \
       150.151.0.0/16 flowid 20:100

8.3. What to do with excess bandwidth

In our hypothetical case, we will find that even when the ISP customers are mostly offline (say, at 8AM), our office still gets only 2Mbit, which is rather wasteful. By removing the 'bounded' statements, classes will be able to borrow bandwidth from each other. Some classes may not wish to lend their bandwidth to other classes. Two rival ISPs on a single link may not want to offer each other freebies. In such a case, you can add the keyword 'isolated' at the end of your 'tc class add' lines.

8.4. Class subdivisions

FIXME: completely untested suppositions! Try this!

We can go further than this. Should the employees at the office decide to all fire up their 'napster' clients, it is still possible that our database runs out of bandwidth. Therefore, we create two subclasses, 'Human' and 'Database'. Our database always needs 500Kbit, so we have 1.5Mbit left for Human consumption.
We now need to create two new classes, within our Office class:

     # tc class add dev eth0 parent 10:200 classid 10:250 cbq bandwidth 10Mbit rate \
       500Kbit allot 1514 weight 50Kbit prio 5 maxburst 20 avpkt 1000 \
       bounded
     # tc class add dev eth0 parent 10:200 classid 10:251 cbq bandwidth 10Mbit rate \
       1500Kbit allot 1514 weight 150Kbit prio 5 maxburst 20 avpkt 1000 \
       bounded

FIXME: Finish this example!

8.5. Loadsharing over multiple interfaces

FIXME: document TEQL

9. More queueing disciplines

The Linux kernel offers us lots of queueing disciplines. By far the most widely used is the pfifo_fast queue - this is the default. This also explains why these advanced features are so robust. They are nothing more than 'just another queue'. Each of these queues has specific strengths and weaknesses. Not all of them may be as well tested.

9.1. pfifo_fast

This queue is, as the name says, First In, First Out, which means that no packet receives special treatment. At least, not quite. This queue has 3 so-called 'bands'. Within each band, FIFO rules apply. However, as long as there are packets waiting in band 0, band 1 won't be processed. The same goes for band 1 and band 2.

9.2. Stochastic Fairness Queueing

SFQ, as said earlier, is not quite deterministic, but works (on average). Its main benefits are that it requires little CPU and memory. 'Real' fair queueing requires that the kernel keep track of all running sessions. Stochastic Fairness Queueing (SFQ) is a simple implementation of the fair queueing algorithms family. It's less accurate than others, but it also requires fewer calculations while being almost perfectly fair. The key word in SFQ is conversation (or flow), being a sequence of data packets having enough common parameters to distinguish it from other conversations. The parameters used in the case of IP packets are the source and destination address, and the protocol number. SFQ consists of a dynamically allocated number of FIFO queues, one queue for one conversation.
The discipline runs in round-robin, sending one packet from each FIFO in one turn, and this is why it's called fair. The main advantage of SFQ is that it allows fair sharing of the link between several applications and prevents bandwidth take-over by one client. SFQ however cannot distinguish interactive flows from bulk ones -- one usually needs to do the selection with CBQ first, and then direct the bulk traffic into SFQ.

9.3. Token Bucket Filter

The Token Bucket Filter (TBF) is a simple queue that only passes packets arriving at a rate within the bounds of some administratively set limit, with the possibility to buffer short bursts. The TBF implementation consists of a buffer (bucket), constantly filled by some virtual pieces of information called tokens, at a specific rate (token rate). The most important parameter of the bucket is its size, that is, the number of tokens it can store. Each arriving token lets one incoming data packet out of the queue and is then deleted from the bucket. Associating this algorithm with the two flows -- token and data -- gives us three possible scenarios:

· The data arrives into TBF at a rate equal to the rate of incoming tokens. In this case each incoming packet has its matching token and passes the queue without delay.

· The data arrives into TBF at a rate smaller than the token rate. Only some tokens are deleted at output of each data packet sent out of the queue, so the tokens accumulate, up to the bucket size. The saved tokens can then be used to send data over the token rate, if a short data burst occurs.

· The data arrives into TBF at a rate bigger than the token rate. In this case filter overrun occurs -- incoming data can only be sent out without loss until all accumulated tokens are used. After that, overlimit packets are dropped.

The last scenario is very important, because it allows you to administratively shape the bandwidth available to data passing the filter.
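The text describes TBF without showing an invocation; as a sketch (the device and all numbers below are assumptions, not recommendations), a TBF could be attached like this:

```shell
# Attach a token bucket filter as the root qdisc of eth0:
# sustained rate of 220kbit, a bucket ('burst') of 10 kilobytes,
# and at most 50ms of queueing delay before packets are dropped.
# Requires root.
tc qdisc add dev eth0 root tbf rate 220kbit burst 10kb latency 50ms
```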
The accumulation of tokens allows a short burst of overlimit data to still be passed without loss, but any lasting overload will cause packets to be constantly dropped. The Linux kernel seems to go beyond this specification, and also allows us to limit the speed of the burst transmission. However, Alexey warns us:

     Note that the peak rate TBF is much more tough: with MTU 1500
     P_crit = 150Kbytes/sec. So, if you need greater peak rates, use
     alpha with HZ=1000 :-)

FIXME: is this still true with TSC (pentium+)? Well sort of

FIXME: if not, add section on raising HZ

9.4. Random Early Detect

RED has some extra smartness built in. When a TCP/IP session starts, neither end knows the amount of bandwidth available. So TCP/IP starts to transmit slowly and goes faster and faster, though limited by the latency at which ACKs return. Once a link is filling up, RED starts dropping packets, which indicates to TCP/IP that the link is congested, and that it should slow down. The smart bit is that RED simulates real congestion, and starts to drop some packets some time before the link is entirely filled up. Once the link is completely saturated, it behaves like a normal policer. For more information on this, see the Backbone chapter.

9.5. Ingress policer qdisc

The Ingress qdisc comes in handy if you need to ratelimit a host without help from routers or other Linux boxes. You can police incoming bandwidth and drop packets when this bandwidth exceeds your desired rate. This can save your host from a SYN flood, for example, and also works to slow down TCP/IP, which responds to dropped packets by reducing speed.

FIXME: instead of dropping, can we also assign it to a real queue?

FIXME: shaping by dropping packets seems less desirable than using, for example, a token bucket filter. Not sure though, Cisco CAR works this way, and people appear happy with it. See the reference to ``IOS Committed Access Rate'' at the end of this document.
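The ingress policer described above could be set up roughly as follows; the device and the 1mbit limit are assumptions for illustration:

```shell
# Attach the ingress qdisc, then police all incoming IP traffic
# to 1mbit, dropping whatever exceeds that rate. Requires root.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
   match ip src 0.0.0.0/0 \
   police rate 1mbit burst 10k drop flowid :1
```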
In short: you can use this to limit how fast your computer downloads files, thus leaving more of the available bandwidth for others. See the section on protecting your host from SYN floods for an example of how this works.

10. Netfilter & iproute - marking packets

So far we've seen how iproute works, and netfilter was mentioned a few times. This would be a good time to browse through Rusty's Remarkably Unreliable Guides. Netfilter itself can be found here. Netfilter allows us to filter packets, or mangle their headers. One special feature is that we can mark a packet with a number. This is done with the --set-mark facility. As an example, this command marks all packets destined for port 25, outgoing mail:

     # iptables -A PREROUTING -i eth0 -t mangle -p tcp --dport 25 \
       -j MARK --set-mark 1

Let's say that we have multiple connections, one that is fast (and expensive, per megabyte) and one that is slower, but flat fee. We would most certainly like outgoing mail to go via the cheap route. We've already marked the packets with a '1', so we now instruct the routing policy database to act on this:

     # echo 201 mail.out >> /etc/iproute2/rt_tables
     # ip rule add fwmark 1 table mail.out
     # ip rule ls
     0:      from all lookup local
     32764:  from all fwmark 1 lookup mail.out
     32766:  from all lookup main
     32767:  from all lookup default

Now we generate the mail.out table with a route to the slow but cheap link:

     # /sbin/ip route add default via 195.96.98.253 dev ppp0 table mail.out

And we are done. Should we want to make exceptions, there are lots of ways to achieve this. We can modify the netfilter statement to exclude certain hosts, or we can insert a rule with a lower priority that points to the main table for our excepted hosts. We can also use this feature to honour TOS bits by marking packets with a different type of service with different numbers, and creating rules to act on that. This way you can even dedicate, say, an ISDN line to interactive sessions.
Needless to say, this also works fine on a host that's doing NAT ('masquerading').

Note: for this to work, you need to have some options enabled in your kernel:

     IP: advanced router (CONFIG_IP_ADVANCED_ROUTER) [Y/n/?]
     IP: policy routing (CONFIG_IP_MULTIPLE_TABLES) [Y/n/?]
     IP: use netfilter MARK value as routing key (CONFIG_IP_ROUTE_FWMARK) [Y/n/?]

11.  More classifiers

Classifiers are the way by which the kernel decides which queue a packet should be placed into. There are various different classifiers, each of which can be used for different purposes.

  fw      Bases the decision on how the firewall has marked the packet.

  u32     Bases the decision on fields within the packet (i.e. source IP
          address, etc).

  route   Bases the decision on which route the packet will be routed by.

  rsvp, rsvp6
          Bases the decision on the target (destination, protocol) and
          optionally the source as well. (I think)

  tcindex
          FIXME: Fill me in

Note that in general there are many ways in which you can classify packets, and that it generally comes down to preference as to which system you wish to use.

Classifiers in general accept a few arguments in common. They are listed here for convenience:

  protocol
          The protocol this classifier will accept. Generally you will
          only be accepting IP traffic. Required.

  parent  The handle this classifier is to be attached to. This handle
          must be an already existing class. Required.

  prio    The priority of this classifier. Lower numbers get tested
          first.

  handle  This handle means different things to different filters.

FIXME: Add more

All the following sections will assume you are trying to shape the traffic going to HostA. They will assume that the root class has been configured on 1: and that the class you want to send the selected traffic to is 1:1.

11.1.  The "fw" classifier

The "fw" classifier relies on the firewall tagging the packets to be shaped.
So, first we will set up the firewall to tag them:

     # iptables -I PREROUTING -t mangle -p tcp -d HostA \
       -j MARK --set-mark 1

Now all packets to that machine are tagged with the mark 1. We can now build the packet shaping rules to actually shape the packets. We just need to indicate that we want the packets tagged with the mark 1 to go to class 1:1. This is accomplished with the command:

     # tc filter add dev eth1 protocol ip parent 1:0 prio 1 handle 1 fw classid 1:1

This should be fairly self-explanatory. Attach to the 1:0 class a filter with priority 1 which sends all packets marked with 1 in the firewall to class 1:1. Note how the handle here is used to indicate what the mark should be.

That's all there is to it! This is the (IMHO) easy way; the other ways are, I think, harder to understand. Note that you can apply the full power of the firewalling code with this classifier, including matching MAC addresses, user IDs and anything else the firewall can match.

11.2.  The "u32" classifier

The U32 filter is the most advanced filter available in the current implementation. It is entirely based on hashing tables, which makes it robust when there are many filter rules.

In its simplest form the U32 filter is a list of records, each consisting of two fields: a selector and an action. The selectors, described below, are compared with the currently processed IP packet until the first match occurs, and then the associated action is performed. The simplest type of action would be directing the packet into a defined CBQ class.

The command line of the tc filter program, used to configure the filter, consists of three parts: filter specification, a selector and an action. The filter specification can be defined as:

     tc filter add dev IF [ protocol PROTO ]
                          [ (preference|priority) PRIO ]
                          [ parent CBQ ]

The protocol field describes the protocol that the filter will be applied to. We will only discuss the case of the ip protocol.
The preference field (priority can be used alternatively) sets the priority of the currently defined filter. This is important, since you can have several filters (lists of rules) with different priorities. Each list is passed in the order its rules were added; then the list with the next lower priority (higher preference number) is processed.

The parent field defines the top of the CBQ tree (e.g. 1:0) that the filter should be attached to.

The options described above apply to all filters, not only U32.

11.2.1.  U32 selector

The U32 selector contains a definition of the pattern that will be matched against the currently processed packet. Precisely, it defines which bits are to be matched in the packet header, and nothing more, but this simple method is very powerful. Let's take a look at the following examples, taken directly from a pretty complex, real-world filter:

     # filter parent 1: protocol ip pref 10 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:3 \
       match 00100000/00ff0000 at 0

For now, leave the first line alone - all these parameters describe the filter's hash tables. Focus on the selector line, containing the match keyword. This selector will match IP headers whose second byte is 0x10. As you can guess, the 00ff number is the match mask, telling the filter exactly which bits to match. Here it's 0xff, so the byte will match only if it's exactly 0x10. The at keyword means that the match is to be started at the specified offset (in bytes) - in this case it's the beginning of the packet. Translating all that to human language, the packet will match if its Type of Service field has the "low delay" bit set.

Let's analyze another rule:

     # filter parent 1: protocol ip pref 10 u32 fh 800::803 order 2051 key ht 800 bkt 0 flowid 1:3 \
       match 00000016/0000ffff at nexthdr+0

The nexthdr option means the next header encapsulated in the IP packet, i.e. the header of the upper-layer protocol. The match will also start at the beginning of the next header.
The match should occur in the second 16-bit word of the header (the mask 0000ffff blanks out the first 16 bits). In the TCP and UDP protocols this field contains the packet's destination port. The number is given in big-endian format, i.e. most significant bits first, so we simply read 0x0016 as 22 decimal, which stands for the SSH service if this is TCP. As you can guess, this match is ambiguous without a context, and we will discuss this later.

Having understood all the above, we will find the following selector quite easy to read: match c0a80100/ffffff00 at 16. What we have here is a three-byte match at the 17-th byte, counting from the IP header start. This will match packets with a destination address anywhere in the 192.168.1/24 network. After analyzing the examples, we can summarize what we have learnt.

11.2.2.  General selectors

General selectors define the pattern, mask and offset against which the pattern will be matched to the packet contents. Using the general selectors you can match virtually any single bit in the IP (or upper-layer) header. They are more difficult to write and read, though, than the specific selectors described below. The general selector syntax is:

     match [ u32 | u16 | u8 ] PATTERN MASK [ at OFFSET | nexthdr+OFFSET ]

One of the keywords u32, u16 or u8 specifies the length of the pattern in bits. PATTERN and MASK should follow, of the length defined by the previous keyword. The OFFSET parameter is the offset, in bytes, at which to start matching. If the nexthdr+ keyword is given, the offset is relative to the start of the upper-layer header.

Some examples:

     # tc filter add dev ppp14 parent 1:0 prio 10 u32 \
       match u8 64 0xff at 8 \
       flowid 1:4

A packet will match this rule if its time to live (TTL) is 64. TTL is the field starting just after the 8-th byte of the IP header.

     # tc filter add dev ppp14 parent 1:0 prio 10 u32 \
       match u8 0x10 0xff at nexthdr+13 \
       protocol tcp \
       flowid 1:3

This rule will only match TCP packets with the ACK bit set. Here we can see an example of using two selectors; the final result is the logical AND of their results.
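Following the same pattern (the device and class handles here are assumptions), a general selector picking out TCP traffic to destination port 80 could be written as:

```shell
# 0x0050 = 80; the destination port sits in the second 16-bit word of
# the TCP header, hence the offset nexthdr+2. The second match
# restricts this to protocol 6 (TCP) in the 10-th byte of the IP header.
tc filter add dev eth0 parent 1:0 prio 10 u32 \
   match u16 0x0050 0xffff at nexthdr+2 \
   match u8 0x06 0xff at 9 \
   flowid 1:2
```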
If we take a look at the TCP header diagram, we can see that the ACK bit is the 0x10 bit in the 14-th byte of the TCP header (at nexthdr+13). As for the second selector, if we'd like to make our life harder, we could write match u8 0x06 0xff at 9 instead of using the specific selector protocol tcp, because 6 is the number of the TCP protocol, found in the 10-th byte of the IP header. On the other hand, in this example we couldn't use any specific selector for the first match - simply because there's no specific selector to match TCP ACK bits.

11.2.3.  Specific selectors

The following table contains a list of all specific selectors the author of this section has found in the tc program source code. They simply make your life easier and increase the readability of your filter's configuration.

FIXME: table placeholder - the table is in separate file ,,selector.html''

FIXME: it's also still in Polish :-(

FIXME: must be sgml'ized

Some examples:

     # tc filter add dev ppp0 parent 1:0 prio 10 u32 \
       match ip tos 0x10 0xff \
       flowid 1:4

The above rule will match packets which have the TOS field set to 0x10. The TOS field starts at the second byte of the packet and is one byte big, so we could write an equivalent general selector: match u8 0x10 0xff at 1. This gives us a hint about the internals of the U32 filter - the specific rules are always translated to general ones, and it is in this form that they are stored in the kernel memory.

This leads to another conclusion - the tcp and udp selectors are exactly the same, and this is why you can't use a single match tcp dst 53 0xffff selector to match TCP packets sent to a given port - it would also match UDP packets sent to that port. You must remember to also specify the protocol, ending up with the following rule:

     # tc filter add dev ppp0 parent 1:0 prio 10 u32 \
       match tcp dst 53 0xffff \
       match ip protocol 0x6 0xff \
       flowid 1:2

11.3.  The "route" classifier

This classifier filters based on the results of the routing tables.
When a packet traversing the classes reaches one that is marked with the "route" filter, it splits the packets up based on information in the routing table.

     # tc filter add dev eth1 parent 1:0 protocol ip prio 100 route

Here we add a route classifier onto the parent node 1:0 with priority 100. When a packet reaches this node (which, since it is the root, will happen immediately) it will consult the routing table. If a route matches, the packet is sent to the given class with a priority of 100. Then, to finally kick it into action, you add the appropriate routing entry.

The trick here is to define a 'realm' based on either destination or source. The way to do it is like this:

     # ip route add Host/Network via Gateway dev Device realm RealmNumber

For instance, we can define our destination network 192.168.10.0 with realm number 10:

     # ip route add 192.168.10.0/24 via 192.168.10.1 dev eth1 realm 10

When adding route filters, we can use realm numbers to represent the networks or hosts and specify how the routes match the filters:

     # tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
       route to 10 classid 1:10

The above rule says packets going to the network 192.168.10.0 match class id 1:10.

The route filter can also be used to match on source routes. For example, there is a subnetwork attached to the Linux router on eth2:

     # ip route add 192.168.2.0/24 dev eth2 realm 2
     # tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
       route from 2 classid 1:2

Here the filter specifies that packets from the subnetwork 192.168.2.0 (realm 2) will match class id 1:2.

11.4.  The "rsvp" classifier

FIXME: Fill me in

11.5.  The "tcindex" classifier

FIXME: Fill me in

12.  Kernel network parameters

The kernel has lots of parameters which can be tuned for different circumstances. While, as usual, the default parameters serve 99% of installations very well, we don't call this the Advanced HOWTO for the fun of it! The interesting bits are in /proc/sys/net; take a look there.
Not everything will be documented here initially, but we're working on it.

12.1.  Reverse Path Filtering

By default, routers route everything, even packets which 'obviously' don't belong on your network. A common example is private IP space escaping onto the internet. If you have an interface with a route of 195.96.96.0/24 to it, you do not expect packets from 212.64.94.1 to arrive there.

Lots of people will want to turn this behaviour off, so the kernel hackers have made it easy. There are files in /proc where you can tell the kernel to do this for you. The method is called "Reverse Path Filtering". Basically, if the reply to a packet wouldn't go out over the interface this packet came in on, then this is a bogus packet and should be ignored.

The following fragment will turn this on for all current and future interfaces:

     # for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
     >  echo 2 > $i
     > done

Going by the example above, if a packet arrived on the Linux router on eth1 claiming to come from the Office+ISP subnet, it would be dropped. Similarly, if a packet came from the Office subnet claiming to be from somewhere outside your firewall, it would be dropped also.

The above is full reverse path filtering. The default is to only filter based on IPs that are on directly connected networks. This is because the full filtering breaks in the case of asymmetric routing, where packets come in one way and go out another - like satellite traffic, or if you have dynamic (bgp, ospf, rip) routes in your network. The data comes down through the satellite dish and replies go back through normal land-lines.

If this exception applies to you (and you'll probably know if it does), you can simply turn off the rp_filter on the interface where the satellite data comes in. If you want to see if any packets are being dropped, the log_martians file in the same directory will tell the kernel to log them to your syslog.
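As a sketch of that exception (the interface name is an assumption), you would exempt only the satellite-facing interface:

```shell
# eth1 receives the asymmetrically routed (satellite) traffic,
# so turn reverse path filtering off there only
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
```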
     # echo 1 >/proc/sys/net/ipv4/conf/<interfacename>/log_martians

FIXME: is setting the conf/{default,all}/* files enough? - martijn

12.2.  Obscure settings

Ok, there are a lot of parameters which can be modified. We try to list them all. Also documented (partly) in Documentation/ip-sysctl.txt.

Some of these settings have different defaults based on whether you answered 'Yes' to 'Configure as router and not host' while compiling your kernel.

12.2.1.  Generic ipv4

As a generic note, most rate limiting features don't work on loopback, so don't test them locally.

/proc/sys/net/ipv4/icmp_destunreach_rate
     FIXME: fill this in

/proc/sys/net/ipv4/icmp_echo_ignore_all
     FIXME: fill this in

/proc/sys/net/ipv4/icmp_echo_ignore_broadcasts [Useful]
     If you ping the broadcast address of a network, all hosts are
     supposed to respond. This makes for a dandy denial-of-service
     tool. Set this to 1 to ignore these broadcast messages.

/proc/sys/net/ipv4/icmp_echoreply_rate
     FIXME: fill this in

/proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
     FIXME: fill this in

/proc/sys/net/ipv4/icmp_paramprob_rate
     FIXME: fill this in

/proc/sys/net/ipv4/icmp_timeexceed_rate
     This is the famous cause of the 'Solaris middle star' in
     traceroutes. Limits the number of ICMP Time Exceeded messages
     sent.

     FIXME: Units of these rates - either I'm stupid, or this just
     doesn't work

/proc/sys/net/ipv4/igmp_max_memberships
     FIXME: fill this in

/proc/sys/net/ipv4/inet_peer_gc_maxtime
     FIXME: fill this in

/proc/sys/net/ipv4/inet_peer_gc_mintime
     FIXME: fill this in

/proc/sys/net/ipv4/inet_peer_maxttl
     FIXME: fill this in

/proc/sys/net/ipv4/inet_peer_minttl
     FIXME: fill this in

/proc/sys/net/ipv4/inet_peer_threshold
     FIXME: fill this in

/proc/sys/net/ipv4/ip_autoconfig
     FIXME: fill this in

/proc/sys/net/ipv4/ip_default_ttl
     Time To Live of packets. Set to a safe 64. Raise it if you have a
     huge network, but don't do so for fun - routing loops cause much
     more damage that way. You might even consider lowering it in some
     circumstances.
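The broadcast-ping setting described above, for example, is just an echo into /proc:

```shell
# stop answering pings sent to the broadcast address
# (classic 'smurf' amplification protection)
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
```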
/proc/sys/net/ipv4/ip_dynaddr
     You need to set this if you use dial-on-demand with a dynamic
     interface address. Once your demand interface comes up, any
     queued packets will be rebranded to have the right address. This
     solves the problem that the connection that brings up your
     interface itself does not work, but the second try does.

/proc/sys/net/ipv4/ip_forward
     Whether the kernel should attempt to forward packets. Off by
     default for hosts, on by default when configured as a router.

/proc/sys/net/ipv4/ip_local_port_range
     Range of local ports for outgoing connections. Actually quite
     small by default, 1024 to 4999.

/proc/sys/net/ipv4/ip_no_pmtu_disc
     Set this if you want to disable Path MTU discovery - a technique
     to determine the largest Maximum Transfer Unit possible on your
     path.

/proc/sys/net/ipv4/ipfrag_high_thresh
     FIXME: fill this in

/proc/sys/net/ipv4/ipfrag_low_thresh
     FIXME: fill this in

/proc/sys/net/ipv4/ipfrag_time
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_abort_on_overflow
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_fin_timeout
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_keepalive_intvl
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_keepalive_probes
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_keepalive_time
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_max_orphans
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_max_syn_backlog
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_max_tw_buckets
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_orphan_retries
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_retrans_collapse
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_retries1
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_retries2
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_rfc1337
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_sack
     Use Selective ACKs, which can be used to signify that only a
     single packet is missing - therefore helping fast recovery.
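To widen that port range (the values below are merely a common choice, not a recommendation of this HOWTO):

```shell
# allow more simultaneous outgoing connections by enlarging
# the local port range
echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range
```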
/proc/sys/net/ipv4/tcp_stdurg
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_syn_retries
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_synack_retries
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_timestamps
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_tw_recycle
     FIXME: fill this in

/proc/sys/net/ipv4/tcp_window_scaling
     TCP/IP normally allows windows of up to 65535 bytes. For really
     fast networks, this may not be enough. The window scaling option
     allows for almost gigabyte windows, which is good for high
     bandwidth*delay products.

12.2.2.  Per device settings

DEV can either stand for a real interface, or for 'all' or 'default'. Default also changes settings for interfaces yet to be created.

/proc/sys/net/ipv4/conf/DEV/accept_redirects
     If a router decides that you are using it for a wrong purpose
     (ie, it needs to resend your packet on the same interface), it
     will send us an ICMP Redirect. This is a slight security risk
     however, so you may want to turn it off, or use secure redirects.

/proc/sys/net/ipv4/conf/DEV/accept_source_route
     Not used very much anymore. You used to be able to give a packet
     a list of IP addresses it should visit on its way. Linux can be
     made to honor this IP option.

/proc/sys/net/ipv4/conf/DEV/bootp_relay
     FIXME: fill this in

/proc/sys/net/ipv4/conf/DEV/forwarding
     FIXME:

/proc/sys/net/ipv4/conf/DEV/log_martians
     See the section on reverse path filters.

/proc/sys/net/ipv4/conf/DEV/mc_forwarding
     Whether we do multicast forwarding on this interface.

/proc/sys/net/ipv4/conf/DEV/proxy_arp
     FIXME: fill this in

/proc/sys/net/ipv4/conf/DEV/rp_filter
     See the section on reverse path filters.

/proc/sys/net/ipv4/conf/DEV/secure_redirects
     FIXME: fill this in

/proc/sys/net/ipv4/conf/DEV/send_redirects
     Whether we send the redirects mentioned above.

/proc/sys/net/ipv4/conf/DEV/shared_media
     FIXME: fill this in

/proc/sys/net/ipv4/conf/DEV/tag
     FIXME: fill this in

12.2.3.  Neighbor policy

DEV can either stand for a real interface, or for 'all' or 'default'.
Default also changes settings for interfaces yet to be created.

/proc/sys/net/ipv4/neigh/DEV/anycast_delay
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/app_solicit
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/base_reachable_time
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/delay_first_probe_time
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/gc_stale_time
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/locktime
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/mcast_solicit
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/proxy_delay
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/proxy_qlen
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/retrans_time
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/ucast_solicit
     FIXME: fill this in

/proc/sys/net/ipv4/neigh/DEV/unres_qlen
     FIXME: fill this in

12.2.4.  Routing settings

/proc/sys/net/ipv4/route/error_burst
     FIXME: fill this in

/proc/sys/net/ipv4/route/error_cost
     FIXME: fill this in

/proc/sys/net/ipv4/route/flush
     FIXME: fill this in

/proc/sys/net/ipv4/route/gc_elasticity
     FIXME: fill this in

/proc/sys/net/ipv4/route/gc_interval
     FIXME: fill this in

/proc/sys/net/ipv4/route/gc_min_interval
     FIXME: fill this in

/proc/sys/net/ipv4/route/gc_thresh
     FIXME: fill this in

/proc/sys/net/ipv4/route/gc_timeout
     FIXME: fill this in

/proc/sys/net/ipv4/route/max_delay
     FIXME: fill this in

/proc/sys/net/ipv4/route/max_size
     FIXME: fill this in

/proc/sys/net/ipv4/route/min_adv_mss
     FIXME: fill this in

/proc/sys/net/ipv4/route/min_delay
     FIXME: fill this in

/proc/sys/net/ipv4/route/min_pmtu
     FIXME: fill this in

/proc/sys/net/ipv4/route/mtu_expires
     FIXME: fill this in

/proc/sys/net/ipv4/route/redirect_load
     FIXME: fill this in

/proc/sys/net/ipv4/route/redirect_number
     FIXME: fill this in

/proc/sys/net/ipv4/route/redirect_silence
     FIXME: fill this in

13.
Backbone applications of traffic control

This chapter is meant as an introduction to backbone routing, which often involves bandwidths of over 100 megabits and requires a different approach than with your ADSL modem at home.

13.1.  Router queues

The normal behaviour of router queues on the Internet is called tail-drop. Tail-drop works by queueing up to a certain amount, then dropping all traffic that 'spills over'. This is very unfair, and also leads to retransmit synchronisation. When retransmit synchronisation occurs, the sudden burst of drops from a router that has reached its fill will cause a delayed burst of retransmits, which will overfill the congested router again.

In order to cope with transient congestion on links, backbone routers will often implement large queues. Unfortunately, while these queues are good for throughput, they can substantially increase latency and cause TCP connections to behave in a very bursty manner during congestion.

These issues with tail-drop are becoming increasingly troublesome on the Internet because the use of network-unfriendly applications is increasing. The Linux kernel offers us RED, short for Random Early Detect.

RED isn't a cure-all for this; applications which inappropriately fail to implement exponential backoff still get an unfair share of the bandwidth. However, with RED they do not cause as much harm to the throughput and latency of other connections.

RED statistically drops packets from flows before the queue reaches its hard limit. This causes a congested backbone link to slow down more gracefully, and prevents retransmit synchronisation. It also helps TCP find its 'fair' speed faster by allowing some packets to get dropped sooner, keeping queue sizes low and latency under control. The probability of a packet being dropped from a particular connection is proportional to its bandwidth usage rather than to the number of packets it transmits.
RED is a good queue for backbones, where you can't afford the complexity of per-session state tracking needed by fairness queueing.

In order to use RED, you must decide on three parameters: min, max and burst. Min sets the minimum queue size in bytes before dropping will begin, max is a soft maximum that the algorithm will attempt to stay under, and burst sets the maximum number of packets that can 'burst through'.

You should set min by deciding on the highest acceptable base queueing latency, and multiplying it by your bandwidth. For instance, on my 64kbit/s ISDN link, I might want a base queueing latency of 200ms, so I set min to 1600 bytes. Setting min too small will degrade throughput, and setting it too large will degrade latency. Setting a small min is not a replacement for reducing the MTU on a slow link to improve interactive response.

You should make max at least twice min to prevent synchronisation. On slow links with small min's it might be wise to make max perhaps four or more times larger than min.

Burst controls how the RED algorithm responds to bursts. Burst must be set larger than min/avpkt. Experimentally, I've found (min+min+max)/(3*avpkt) to work okay.

Additionally, you need to set limit and avpkt. Limit is a safety value: after there are limit bytes in the queue, RED 'turns into' tail-drop. I typically set limit to eight times max. Avpkt should be your average packet size. 1000 works okay on high speed Internet links with a 1500 byte MTU.

Read the paper on RED queueing by Sally Floyd and Van Jacobson for technical information.

FIXME: more needed. This means *you* greg :-) - ahu

14.  Shaping Cookbook

This section contains 'cookbook' entries which may help you solve problems. A cookbook is no replacement for understanding however, so try and comprehend what is going on.

14.1.  Running multiple sites with different SLAs

You can do this in several ways.
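Putting those rules of thumb together for the 64kbit/s example, the arithmetic can be scripted; the resulting tc invocation is printed rather than executed, and the bandwidth and probability values in it are assumptions, not part of the rules above:

```shell
#!/bin/sh
# Derive RED parameters from the rules of thumb above (64kbit/s link).
BYTES_PER_SEC=8000                 # 64kbit/s expressed in bytes/s
LATENCY_MS=200                     # chosen base queueing latency
AVPKT=1000                         # assumed average packet size

MIN=$(( BYTES_PER_SEC * LATENCY_MS / 1000 ))      # latency * bandwidth
MAX=$(( 4 * MIN ))                                # small min, so 4*min
LIMIT=$(( 8 * MAX ))                              # tail-drop safety net
BURST=$(( (MIN + MIN + MAX) / (3 * AVPKT) + 1 ))  # must exceed min/avpkt

echo "tc qdisc add dev ppp0 root red limit $LIMIT min $MIN max $MAX" \
     "avpkt $AVPKT burst $BURST bandwidth 64kbit probability 0.02"
```

With the numbers above this yields min 1600, max 6400, limit 51200 and burst 4, matching the hand calculation in the text.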
Apache has some support for this with a module, but we'll show how Linux can do this for you, and do so for other services as well. These commands are stolen from a presentation by Jamal Hadi that's referenced below.

Let's say we have two customers, with http, ftp and streaming audio, and we want to sell them a limited amount of bandwidth. We do so on the server itself.

Customer A should have at most 2 megabits, customer B has paid for 5 megabits. We separate our customers by creating virtual IP addresses on our server:

     # ip address add 188.177.166.1 dev eth0
     # ip address add 188.177.166.2 dev eth0

It is up to you to attach the different servers to the right IP address. All popular daemons have support for this.

We first attach a CBQ qdisc to eth0:

     # tc qdisc add dev eth0 root handle 1: cbq bandwidth 10Mbit cell 8 avpkt 1000 \
       mpu 64

We then create classes for our customers:

     # tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate \
       2Mbit avpkt 1000 prio 5 bounded isolated allot 1514 weight 1 maxburst 21
     # tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth 10Mbit rate \
       5Mbit avpkt 1000 prio 5 bounded isolated allot 1514 weight 1 maxburst 21

Then we add filters for our two classes:

     ##FIXME: Why this line, what does it do?, what is a divisor?
     ##FIXME: A divisor has something to do with a hash table, and the number of
     ##       buckets - ahu
     # tc filter add dev eth0 parent 1:0 protocol ip prio 5 handle 1: u32 divisor 1
     # tc filter add dev eth0 parent 1:0 prio 5 u32 match ip src 188.177.166.1 flowid 1:1
     # tc filter add dev eth0 parent 1:0 prio 5 u32 match ip src 188.177.166.2 flowid 1:2

And we're done.

FIXME: why no token bucket filter? is there a default pfifo_fast fallback somewhere?

14.2.  Protecting your host from SYN floods

From Alexey's iproute documentation, adapted to netfilter and with more plausible paths. If you use this, take care to adjust the numbers to reasonable values for your system.
If you want to protect an entire network, skip this script, which is best suited for a single host.

     #! /bin/sh -x
     #
     # sample script on using the ingress capabilities
     # this script shows how one can rate limit incoming SYNs
     # Useful for TCP-SYN attack protection. You can use
     # IPchains to have more powerful additions to the SYN (eg
     # in addition the subnet)
     #
     # path to various utilities;
     # change to reflect yours.
     #
     TC=/sbin/tc
     IP=/sbin/ip
     IPTABLES=/sbin/iptables
     INDEV=eth2
     #
     # tag all incoming SYN packets through $INDEV as mark value 1
     ############################################################
     $IPTABLES -A PREROUTING -i $INDEV -t mangle -p tcp --syn \
       -j MARK --set-mark 1
     ############################################################
     #
     # install the ingress qdisc on the ingress interface
     ############################################################
     $TC qdisc add dev $INDEV handle ffff: ingress
     ############################################################
     #
     # SYN packets are 40 bytes (320 bits) so three SYNs equals
     # 960 bits (approximately 1kbit); so we rate limit below
     # the incoming SYNs to 3/sec (not very useful really; but
     # serves to show the point - JHS
     ############################################################
     $TC filter add dev $INDEV parent ffff: protocol ip prio 50 handle 1 fw \
       police rate 1kbit burst 40 mtu 9k drop flowid :1
     ############################################################
     #
     echo "---- qdisc parameters Ingress  ----------"
     $TC qdisc ls dev $INDEV
     echo "---- Class parameters Ingress  ----------"
     $TC class ls dev $INDEV
     echo "---- filter parameters Ingress ----------"
     $TC filter ls dev $INDEV parent ffff:

     # deleting the ingress qdisc
     #$TC qdisc del dev $INDEV ingress

14.3.  Ratelimit ICMP to prevent dDoS

Recently, distributed denial of service attacks have become a major nuisance on the internet. By properly filtering and ratelimiting your network, you can avoid becoming either a casualty or a cause of these attacks.
You should filter your networks so that you do not allow packets with non-local source IP addresses to leave your network. This stops people from anonymously sending junk to the internet.

Rate limiting goes much as shown earlier. To refresh your memory, our ASCIIgram again:

     [The Internet] ------ [Linux router] --- [Office+ISP]
                     eth1                eth0

We first set up the prerequisite parts:

     # tc qdisc add dev eth0 root handle 10: cbq bandwidth 10Mbit avpkt 1000
     # tc class add dev eth0 parent 10:0 classid 10:1 cbq bandwidth 10Mbit rate \
       10Mbit allot 1514 prio 5 maxburst 20 avpkt 1000

If you have 100Mbit or faster interfaces, adjust these numbers.

Now you need to determine how much ICMP traffic you want to allow. You can perform measurements with tcpdump, by having it write to a file for a while and seeing how much ICMP passes your network. Do not forget to raise the snapshot length! If measurement is impractical, you might want to choose 5% of your available bandwidth. Let's set up our class:

     # tc class add dev eth0 parent 10:1 classid 10:100 cbq bandwidth 10Mbit rate \
       100Kbit allot 1514 weight 800Kbit prio 5 maxburst 20 avpkt 250 \
       bounded

This limits at 100Kbit. Now we need a filter to assign ICMP traffic to this class:

     # tc filter add dev eth0 parent 10:0 protocol ip prio 100 u32 \
       match ip protocol 1 0xFF flowid 10:100

14.4.  Prioritising interactive traffic

If lots of data is coming down your link, or going up for that matter, and you are trying to do some maintenance via telnet or ssh, this may not go too well: other packets are blocking your keystrokes. Wouldn't it be great if there were a way for your interactive packets to sneak past the bulk traffic? Linux can do this for you!

As before, we need to handle traffic going both ways. Evidently, this works best if there are Linux boxes on both ends of your link, although other UNIXes are able to do this. Consult your local Solaris/BSD guru.

The standard pfifo_fast scheduler has 3 different 'bands'.
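As a sketch of the measurement step (the interface name and output file are assumptions), you might capture only ICMP with a raised snapshot length and inspect the resulting file afterwards:

```shell
# -s 1500 raises the snapshot length to a full 1500-byte MTU packet;
# the trailing 'icmp' filter captures only ICMP traffic
tcpdump -i eth0 -s 1500 -w icmp.pcap icmp
```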
Traffic in band 0 is transmitted first, after which traffic in bands 1 and 2 gets considered. It is vital that our interactive traffic be in band 0!

We blatantly adapt from the (soon to be obsolete) ipchains HOWTO:

There are four seldom-used bits in the IP header, called the Type of Service (TOS) bits. They affect the way packets are treated; the four bits are "Minimum Delay", "Maximum Throughput", "Maximum Reliability" and "Minimum Cost". Only one of these bits is allowed to be set. Rob van Nieuwkerk, the author of the ipchains TOS-mangling code, puts it as follows:

     Especially the "Minimum Delay" is important for me. I switch it
     on for "interactive" packets in my upstream (Linux) router. I'm
     behind a 33k6 modem link. Linux prioritises packets in 3 queues.
     This way I get acceptable interactive performance while doing
     bulk downloads at the same time.

The most common use is to set telnet & ftp control connections to "Minimum Delay" and FTP data to "Maximum Throughput". This would be done as follows, on your upstream router:

     # iptables -A PREROUTING -t mangle -p tcp --sport telnet \
       -j TOS --set-tos Minimize-Delay
     # iptables -A PREROUTING -t mangle -p tcp --sport ftp \
       -j TOS --set-tos Minimize-Delay
     # iptables -A PREROUTING -t mangle -p tcp --sport ftp-data \
       -j TOS --set-tos Maximize-Throughput

Now, this only works for data going from the foreign telnet host to your local computer. The other way around appears to be done for you, ie, telnet, ssh & friends all set the TOS field on outgoing packets automatically. Should you have a client that does not do this, you can always do it with netfilter. On your local box:

     # iptables -A OUTPUT -t mangle -p tcp --dport telnet \
       -j TOS --set-tos Minimize-Delay
     # iptables -A OUTPUT -t mangle -p tcp --dport ftp \
       -j TOS --set-tos Minimize-Delay
     # iptables -A OUTPUT -t mangle -p tcp --dport ftp-data \
       -j TOS --set-tos Maximize-Throughput

15.
15. Advanced Linux Routing

This section is for all you people who either want to understand why the whole system works, or have a configuration that's so bizarre that you need the low-down to make it work. This section is completely optional. It's quite possible that this section will be quite complex and really not intended for normal users. You have been warned.

FIXME: Decide what really needs to go in here.

15.1. How does packet queueing really work?

This is the low-down on how the packet queueing system really works. Lists the steps the kernel takes to classify a packet, etc...

FIXME: Write this.

15.2. Advanced uses of the packet queueing system

Go through Alexey's extremely tricky example involving the unused bits in the TOS field.

FIXME: Write this.

15.3. Other packet shaping systems

I'd like to include a brief description of other packet shaping systems in other operating systems and how they compare to the Linux one. Since Linux is one of the few OSes that has a completely original (non-BSD-derived) TCP/IP stack, I think it would be useful to see how other people do it. Unfortunately I have no experience with other systems so cannot write this.

FIXME: Anyone? - Martijn

16. Dynamic routing - OSPF and BGP

Once your network starts to get really big, or you start to consider 'the internet' as your network, you need tools which dynamically route your data. Sites are often connected to each other with multiple links, and more are popping up all the time.

The Internet has mostly standardised on OSPF and BGP4 (RFC 1771). Linux supports both, by way of gated and zebra. While currently not within the scope of this document, we would like to point you to the definitive works:

Overview: Cisco Systems, "Designing large-scale IP internetworks".

For OSPF: Moy, John T., "OSPF. The anatomy of an Internet routing protocol", Addison Wesley, Reading, MA, 1998. Halabi has also written a good guide to OSPF routing design, but this appears to have been dropped from the Cisco web site.
For BGP: Halabi, Bassam, "Internet routing architectures", Cisco Press (New Riders Publishing), Indianapolis, IN, 1997. Also: Cisco Systems, "Using the Border Gateway Protocol for interdomain routing". Although the examples are Cisco-specific, they are remarkably similar to the configuration language in Zebra :-)

17. Further reading

http://snafu.freedom.org/linux2.2/iproute-notes.html
  Contains lots of technical information and comments from the kernel.

http://www.davin.ottawa.on.ca/ols/
  Slides by Jamal Hadi, one of the authors of Linux traffic control.

http://defiant.coinet.com/iproute2/ip-cref/
  HTML version of Alexey's LaTeX documentation - explains part of iproute2 in great detail.

http://www.aciri.org/floyd/cbq.html
  Sally Floyd has a good page on CBQ, including her original papers. None of it is Linux-specific, but it does a fair job discussing the theory and uses of CBQ. Very technical stuff, but good reading for those so inclined.

http://ceti.pl/%7ekravietz/cbq/NET4_tc.html
  Yet another HOWTO, this time in Polish! You can copy/paste the command lines, however; they work just the same in every language. The author is cooperating with us and may soon author sections of this HOWTO.

Differentiated Services on Linux
  Discussion on how to use Linux in a diffserv-compliant environment. Pretty far removed from your everyday routing needs, but very interesting nonetheless. We may include a section on this at a later date.

IOS Committed Access Rate
  From the helpful folks at Cisco, who have the laudable habit of putting their documentation online. Cisco syntax is different but the concepts are the same, except that we can do more, and without routers that cost as much as cars :-)

TCP/IP Illustrated, volume 1, W. Richard Stevens, ISBN 0-201-63346-9
  Required reading if you truly want to understand TCP/IP. Entertaining as well.

18. Acknowledgements

It is our goal to list everybody who has contributed to this HOWTO, or helped us demystify how things work.
While there are currently no plans for a Netfilter-type scoreboard, we do like to recognise the people who are helping:

· Jamal Hadi
· Nadeem Hasan
· Jason Lunz
· Alexey Mahotkin
· Pawel Krawczyk
· Wim van der Most
· Glen Turner
· Song Wang

Brief Introduction to Alpha Systems and Processors
Neal Crook, Digital Equipment (Editor: David Mosberger)
V0.11, 6 June 1997

This document is a brief overview of existing Alpha CPUs, chipsets and systems. It has something of a hardware bias, reflecting my own area of expertise. Although I am an employee of Digital Equipment Corporation, this is not an official statement by Digital and any opinions expressed are mine and not Digital's.
______________________________________________________________________

Table of Contents

1. What is Alpha
2. What is Digital Semiconductor
3. Alpha CPUs
4. 21064 performance vs 21066 performance
5. A Few Notes On Clocking
6. The chip-sets
7. The Systems
8. Bytes and all that stuff
9. PALcode and all that stuff
10. Porting
11. More Information
12. References
______________________________________________________________________

1. What is Alpha

"Alpha" is the name given to Digital's 64-bit RISC architecture. The Alpha project in Digital began in mid-1989, with the goal of providing a high-performance migration path for VAX customers. This was not the first RISC architecture to be produced by Digital, but it was the first to reach the market. When Digital announced Alpha, in March 1992, it made the decision to enter the merchant semiconductor market by selling Alpha microprocessors. Alpha is also sometimes referred to as Alpha AXP, for obscure and arcane reasons that aren't worth pursuing. Suffice it to say that they are one and the same.

2. What is Digital Semiconductor

Digital Semiconductor (DS) is the business unit within Digital Equipment Corporation (Digital - we don't like the name DEC) that sells semiconductors on the merchant market.
Digital's products include CPUs, support chipsets, PCI-PCI bridges and PCI peripheral chips for comms and multimedia.

3. Alpha CPUs

There are currently 2 generations of CPU core that implement the Alpha architecture:

o EV4
o EV5

Opinions differ as to what "EV" stands for (Editor's note: the true answer is of course "Electro Vlassic" ``[1]''), but the number represents the first generation of Digital's CMOS technology that the core was implemented in. So, the EV4 was originally implemented in CMOS4. As time goes by, a CPU tends to get a mid-life performance kick by being optically shrunk into the next generation of CMOS process. EV45, then, is the EV4 core implemented in the CMOS5 process. There is a big difference between shrinking a design into a particular technology and implementing it from scratch in that technology (but I don't want to go into that now). There are a few other wildcards in here: there is also a CMOS4S (optical shrink in CMOS4) and a CMOS5L. True technophiles will be interested to know that CMOS4 is a 0.75 micron process, CMOS5 is a 0.5 micron process and CMOS6 is a 0.35 micron process.

To map these CPU cores to chips we get:

  21064-150,166        EV4 (originally), EV4S (now)
  21064-200            EV4S
  21064A-233,275,300   EV45
  21066                LCA4S (EV4 core, with EV4 FPU)
  21066A-233           LCA45 (EV4 core, but with EV45 FPU)
  21164-233,300,333    EV5
  21164A-417           EV56
  21264                EV6

The EV4 core is a dual-issue (it can issue 2 instructions per CPU clock) superpipelined core with integer unit, floating point unit and branch prediction. It is fully bypassed and has 64-bit internal data paths and tightly coupled 8Kbyte caches, one each for Instruction and Data. The caches are write-through (they never get dirty).

The EV45 core has a couple of tweaks to the EV4 core: it has a slightly improved floating point unit, and 16Kbyte caches, one each for Instruction and Data (it also has cache parity).
(Editor's note: Neal Crook indicated in a separate mail that the changes to the floating point unit (FPU) improve the performance of the divider. The EV4 FPU divider takes 34 cycles for a single-precision divide and 63 cycles for a double-precision divide (non data-dependent). In contrast, the EV45 divider takes typically 19 cycles (34 cycles max) for a single-precision and typically 29 cycles (63 cycles max) for a double-precision division (data-dependent).)

The EV5 core is a quad-issue core, also superpipelined, fully bypassed etc. It has tightly-coupled 8Kbyte caches, one each for I and D. These caches are write-through. It also has a tightly-coupled 96Kbyte on-chip second-level cache (the Scache) which is 3-way set associative and write-back (it can be dirty). The EV4->EV5 performance increase is better than just the increase achieved by clock speed improvements. As well as the bigger caches and quad issue, there are microarchitectural improvements to reduce producer/consumer latencies in some paths.

The EV56 core is fundamentally the same microarchitecture as the EV5, but it adds some new instructions for 8 and 16-bit loads and stores (see Section ``Bytes and all that stuff''). These are primarily intended for use by device drivers. The EV56 core is implemented in CMOS6, which is a 2.0V process.

The 21064 was announced in March 1992. It uses the EV4 core, with a 128-bit bus interface. The bus interface supports the 'easy' connection of an external second-level cache, with a block size of 256 bits (2 data beats on the bus). The Bcache timing is completely software-configurable. The 21064 can also be configured to use a 64-bit external bus (but I'm not sure if any shipping system uses this mode). The 21064 does not impose any policy on the Bcache, but it is usually configured as a write-back cache. The 21064 does contain hooks to allow external hardware to maintain cache coherence with the Bcache and internal caches, but this is hairy.
The 21066 uses the EV4 core and integrates a memory controller and PCI host bridge. To save pins, the memory controller has a 64-bit data bus (but the internal caches have a block size of 256 bits, just like the 21064, therefore a block fill takes 4 beats on the bus). The memory controller supports an external Bcache and external DRAMs. The timing of the Bcache and DRAMs is completely software-configurable, and can be controlled to the resolution of the CPU clock period. Having a 4-beat process to fill a cache block isn't as bad as it sounds, because the DRAM access is done in page mode. Unfortunately, the memory controller doesn't support any of the new esoteric DRAMs (SDRAM, EDO or BEDO) or synchronous cache RAMs. The PCI bus interface is fully rev 2.0 compliant and runs at up to 33MHz.

The 21164 has a 128-bit data bus and supports split reads, with up to 2 reads outstanding at any time (this allows 100% data bus utilisation under best-case dream-on conditions, i.e., you can theoretically transfer 128 bits of data on every bus clock). The 21164 supports easy connection of an external third-level cache (Bcache) and has all the hooks to allow external systems to maintain full cache coherence with all caches. Therefore, symmetric multiprocessor designs are 'easy'.

The 21164A was announced in October 1995. It uses the EV56 core. It is nominally pin-compatible with the 21164, but requires split power rails; all of the power pins that were +3.3V power on the 21164 have now been split into two groups: one group provides 2.0V power to the CPU core, the other group supplies 3.3V to the I/O cells. Unlike older implementations, the 21164A pins are not 5V-tolerant. The end result of this change is that 21164 systems are, in general, not upgradeable to the 21164A (though note that it would be relatively straightforward to design a 21164A system that could also accommodate a 21164). The 21164A also has a couple of new pins to support the new 8 and 16-bit loads and stores.
It also improves the 21164 support for using synchronous SRAMs to implement the external Bcache.

4. 21064 performance vs 21066 performance

The 21064 and the 21066 have the same (EV4) CPU core. If the same program is run on a 21064 and a 21066, at the same CPU speed, then the difference in performance comes only as a result of system Bcache/memory bandwidth. Any code thread that has a high hit-rate on the internal caches will perform the same. There are 2 big performance killers:

1. Code that is write-intensive. Even though the 21064 and the 21066 have write buffers to swallow some of the delays, code that is write-intensive will be throttled by write bandwidth at the system bus. This arises because the on-chip caches are write-through.

2. Code that wants to treat floats as integers. The Alpha architecture does not allow register-register transfers from integer registers to floating point registers. Such a conversion has to be done via memory (and therefore, because the on-chip caches are write-through, via the Bcache).

(Editor's note: it seems that both the EV4 and EV45 can perform the conversion through the primary data cache (Dcache), provided that the memory is cached already. In such a case, the store in the conversion sequence will update the Dcache and the subsequent load is, under certain circumstances, able to read the updated Dcache value, thus avoiding a costly roundtrip to the Bcache. In particular, it seems best to execute the stq/ldt or stt/ldq instructions back-to-back, which is somewhat counter-intuitive.)

If you make the same comparison between a 21064A and a 21066A, there is an additional factor due to the different Icache and Dcache sizes between the two chips.

Now, the 21164 solves both these problems: it achieves much higher system bus bandwidths (despite having the same number of signal pins - yes, I know it's got about twice as many pins as a 21064, but all those extra ones are power and ground! (yes, really!!)) and it has write-back caches.
The only remaining problem is the answer to the question "how much does it cost?"

5. A Few Notes On Clocking

All of the current Alpha CPUs use high-speed clocks, because their microarchitectures have been designed as so-called short-tick designs. None of the system buses have to run at horrendous speeds as a result, though:

o On the 21066(A), 21064(A) and 21164, the off-chip cache (Bcache) timing is completely programmable, to the resolution of the CPU clock. For example, on a 275MHz CPU, the Bcache read access time can be controlled with a resolution of 3.6ns.

o On the 21066(A), the DRAM timing is completely programmable, to the resolution of the CPU clock (not the PCI clock, the CPU clock).

o On the 21064(A) and 21164(A), the system bus frequency is a sub-multiple of the CPU clock frequency. Most of the 21064 motherboards use a 33MHz system bus clock.

o Systems that use the 21066 can run the PCI at any frequency relative to the CPU. Generally, the PCI runs at 33MHz.

o Systems that use the APECS chipset (see Section ``The chip-sets'') always have their CPU system bus equal to their PCI bus frequency. This means that both buses tend to run at either 25MHz or 33MHz (since these are the frequencies that scale up to match the CPU frequencies). On APECS systems, the DRAM controller timings are software-programmable in terms of the CPU system bus frequency.

Aside: someone suggested that they were getting bad performance on a 21066 because the 21066 memory controller was only running at 33MHz. Actually, it's the superfast 21064A systems that have memory controllers that 'only' run at 33MHz.

6. The chip-sets

DS sells two CPU support chipsets. The 2107x chipset (aka APECS) is a 21064(A) support chipset. The 2117x chipset (aka ALCOR) is a 21164 support chipset. There will also be a 2117xA chipset (aka ALCOR 2) as a 21164A support chipset. Both chipsets provide memory controllers and PCI host bridges for their CPU.
APECS provides a 32-bit PCI host bridge; ALCOR provides a 64-bit PCI host bridge which (in accordance with the requirements of the PCI spec) can support both 32-bit and 64-bit PCI devices.

APECS consists of six 208-pin chips: four 32-bit data slices (DECADE), one system controller (COMANCHE) and one PCI controller (EPIC). It provides a DRAM controller (128-bit memory bus) and a PCI interface. It also does all the work to maintain memory coherence when a PCI device DMAs into (or out of) memory.

ALCOR consists of five chips: four 64-bit data slices (Data Switch, DSW), each a 208-pin PQFP, and one control chip (Control, I/O Address, CIA), a 383-pin plastic PGA. It provides a DRAM controller (256-bit memory bus) and a PCI interface. It also does all the work required to support an external Bcache and to maintain memory coherence when a PCI device DMAs into (or out of) memory.

There is no support chipset for the 21066, since the memory controller and PCI host bridge functionality are integrated onto the chip.

7. The Systems

The applications engineering group in DS produces example designs using the CPUs and support chipsets. These are typically PC-AT-size motherboards, with all the functionality that you'd typically find on a high-end Pentium motherboard. Originally, these example designs were intended to be used as starting points for third parties to produce motherboard designs from. These first-generation designs were called Evaluation Boards (EBs). As the amount of engineering required to build a motherboard has increased (due to higher-speed clocks and the need to meet RF emission and susceptibility regulations), the emphasis has shifted towards providing motherboards that are suitable for volume manufacture.

Digital's system groups have produced several generations of machines using Alpha processors. Some of these systems use support logic that is designed by the systems groups, and some use commodity chipsets from DS. In some cases, systems use a combination of both.
Various third parties build systems using Alpha processors. Some of these companies design systems from scratch, and others use DS support chipsets, clone/modify DS example designs, or simply package systems using built and tested boards from DS.

The EB64: obsolete design using a 21064, with the memory controller implemented in programmable logic. I/O is provided by using programmable logic to interface a 486<->ISA bridge chip. On-board Ethernet, SuperI/O (2S, 1P, FD) and ISA. PC-AT size. Runs from a standard PC power supply.

The EB64+: uses a 21064 or 21064A and APECS. Has ISA and PCI expansion (3 ISA, 2 PCI; one pair are on a shared slot). Supports 36-bit DRAM SIMMs. The ISA bus is generated by an Intel SaturnI/O PCI-ISA bridge. On-board SCSI (NCR 810 on PCI), Ethernet (Digital 21040), KBD, MOUSE (PS2 style), SuperI/O (2S, 1P, FD), RTC/NVRAM. Boot ROM is EPROM. PC-AT size. Runs from a standard PC power supply.

The EB66: uses a 21066 or 21066A. The I/O sub-system is identical to the EB64+. Baby PC-AT size. Runs from a standard PC power supply. The EB66 schematic was published as a marketing poster advertising the 21066 as "the first microprocessor in the world with embedded PCI" (for trivia fans: there are actually 2 versions of this poster - I drew the circuits and wrote the spiel for the first version, and some Americans mauled the spiel for the second version).

The EB164: uses a 21164 and ALCOR. Has ISA and PCI expansion (3 ISA slots, 2 64-bit PCI slots (one is shared with an ISA slot) and 2 32-bit PCI slots). Uses plug-in Bcache SIMMs. The I/O sub-system provides SuperI/O (2S, 1P, FD), KBD, MOUSE (PS2 style), RTC/NVRAM. Boot ROM is Flash. PC-AT-sized motherboard. Requires a power supply with a 3.3V output.

The AlphaPC64 (aka Cabriolet): derived from the EB64+ but now baby-AT, with Flash boot ROM and no on-board SCSI or Ethernet. 3 ISA slots, 4 PCI slots (one pair are on a shared slot); uses plug-in Bcache SIMMs. Requires a power supply with a 3.3V output.

The AXPpci33 (aka NoName) is based on the EB66.
This design is produced by Digital's Technical OEM (TOEM) group. It uses the 21066 processor running at 166MHz or 233MHz. It is baby-AT size and runs from a standard PC power supply. It has 5 ISA slots and 3 PCI slots (one pair is a shared slot). There are 2 versions, with either PS/2 or large DIN connectors for the keyboard.

Other 21066-based motherboards: most if not all other 21066-based motherboards on the market are also based on the EB66 - there are really not many system options when designing a 21066 system, because all the control is done on-chip.

Multia (aka the Universal Desktop Box): this is a very compact pedestal desktop system based on the 21066. It includes 2 PCMCIA sockets, 21030 (TGA) graphics, 21040 Ethernet and an NCR 810 SCSI disk, along with floppy, 2 serial ports and a parallel port. It has limited expansion capability (one PCI slot) due to its compact size. (There is some restriction on when you can use the PCI slot; can't remember what.) (Note that 21066A-based and Pentium-based Multias are also available.)

DEC PC 150 AXP (aka Jensen): this is a very old Digital system - one of the first-generation Alpha systems. It is only mentioned here because a number of these systems seem to be available on the second-hand market. The Jensen is a floor-standing tower system which used a 150MHz 21064 (later versions used faster CPUs, but I'm not sure what speeds). It used programmable logic to interface a 486 EISA I/O bridge to the CPU.

Other 21064(A) systems: there are 3 or 4 motherboard designs around (I'm not including Digital systems here) and all the ones I know of are derived from the EB64+ design. These include:

o EB64+ (some vendors package the board and sell it unmodified); AT form-factor.
o Aspen Systems motherboard: EB64+ derivative; baby-AT form-factor.
o Aspen Systems server board: many PCI slots (includes a PCI bridge).
o AlphaPC64 (aka Cabriolet): baby-AT form-factor.
Other 21164(A) systems: the only one I'm aware of that isn't simply an EB164 clone is a system made by DeskStation. That system is implemented using a memory and I/O controller proprietary to DeskStation. I don't know what their attitude towards Linux is.

8. Bytes and all that stuff

When the Alpha architecture was introduced, it was unique amongst RISC architectures for eschewing 8-bit and 16-bit loads and stores. It supported 32-bit and 64-bit loads and stores (longword and quadword, in Digital's nomenclature). The co-architects (Dick Sites, Rich Witek) justified this decision by citing the advantages:

1. Byte support in the cache and memory sub-system tends to slow down accesses for 32-bit and 64-bit quantities.

2. Byte support makes it hard to build high-speed error-correction circuitry into the cache/memory sub-system.

Alpha compensates by providing powerful instructions for manipulating bytes and byte groups within 64-bit registers. Standard benchmarks for string operations (e.g., some of the Byte benchmarks) show that Alpha performs very well on byte manipulation.

The absence of byte loads and stores impacts some software semaphores and the design of I/O sub-systems. Digital's solution to the I/O problem is to use some low-order address lines to specify the data size during I/O transfers, and to decode these as byte enables. This so-called Sparse Addressing wastes address space and has the consequence that I/O space is non-contiguous (more on the intricacies of Sparse Addressing when I get around to writing it). Note that I/O space, in this context, refers to all system resources present on the PCI and therefore includes both PCI memory space and PCI I/O space.

With the 21164A introduction, the Alpha architecture was ECO'd to include byte addressing. Executing these new instructions on an earlier CPU will cause an OPCDEC PALcode exception, so that the PALcode will handle the access. This will have a performance impact.
The ramifications of this OPCDEC trap are that use of these new instructions (IMO) should be restricted to device drivers rather than applications code. These new byte loads and stores mean that future support chipsets will be able to support a contiguous I/O space.

9. PALcode and all that stuff

This is a placeholder for a section explaining PALcode. I will write it if there is sufficient interest.

10. Porting

The ability of any Alpha-based machine to run Linux is really only limited by your ability to get information on the gory details of its innards. Since there are Linux ports for the EB66, EB64+ and EB164 boards, all systems based on the 21066, 21064/APECS or 21164/ALCOR should run Linux with little or no modification. The major thing that differs between any of these motherboards is the way that they route interrupts. There are three sources of interrupts:

o on-board devices
o PCI devices
o ISA devices

All the systems use an Intel System I/O bridge (SIO) to act as a bridge between PCI and ISA (the main I/O bus is PCI; the ISA bus is a secondary bus used to support slow-speed and 'legacy' I/O devices). The SIO contains the traditional pair of daisy-chained 8259s. Some systems (e.g., the NoName) route all of their interrupts through the SIO and thence to the CPU. Some systems have a separate interrupt controller and route all PCI interrupts plus the SIO interrupt (the 8259 output) through that, and all ISA interrupts through the SIO.

Other differences between the systems include:

o how many slots they have
o what on-board PCI devices they have
o whether they have Flash or EPROM

11. More Information

All of the DS evaluation boards and motherboard designs are license-free, and the whole documentation kit for a design costs about $50. That includes all the schematics, programmable parts sources, and data sheets for the CPU and support chipset. The doc kits are available from Digital Semiconductor distributors.
I'm not suggesting that many people will want to rush out and buy this, but I do want to point out that the information is available. Hope that was helpful. Comments/updates/suggestions for expansion to Neal Crook.

12. References

[1] Bill Hamburgen, Jeff Mogul, Brian Reid, Alan Eustace, Richard Swan, Mary Jo Doherty, and Joel Bartlett. "Characterization of Organic Illumination Systems". DEC WRL, Technical Note 13, April 1989.

Linux Assembly HOWTO
Konstantin Boldyshev and Francois-Rene Rideau
v0.5k, July 11, 2000

This is the Linux Assembly HOWTO. This document describes how to program in assembly language using FREE programming tools, focusing on development for or from the Linux Operating System, mostly on the IA-32 (i386) platform. Included material may or may not be applicable to other hardware and/or software platforms. Contributions about them are gladly accepted.

Keywords: assembly, assembler, asm, inline asm, macroprocessor, preprocessor, 32-bit, IA-32, i386, x86, nasm, gas, as86, OS, kernel, system, libc, system call, interrupt, small, fast, embedded, hardware, port
______________________________________________________________________

Table of Contents

1. INTRODUCTION
   1.1 Legal Blurb
   1.2 Foreword
   1.3 Contributions
   1.4 Credits
   1.5 History
2. DO YOU NEED ASSEMBLY?
   2.1 Pros and Cons
       2.1.1 The advantages of Assembly
       2.1.2 The disadvantages of Assembly
       2.1.3 Assessment
   2.2 How to NOT use Assembly
       2.2.1 General procedure to achieve efficient code
       2.2.2 Languages with optimizing compilers
       2.2.3 General procedure to speed your code up
       2.2.4 Inspecting compiler-generated code
   2.3 Linux and assembly
3. ASSEMBLERS
   3.1 GCC Inline Assembly
       3.1.1 Where to find GCC
       3.1.2 Where to find docs for GCC Inline Asm
       3.1.3 Invoking GCC to build proper inline assembly code
   3.2 GAS
       3.2.1 Where to find it
       3.2.2 What is this AT&T syntax
       3.2.3 16-bit mode
       3.2.4 GASP
   3.3 NASM
       3.3.1 Where to find NASM
       3.3.2 What it does
   3.4 AS86
       3.4.1 Where to get AS86
       3.4.2 How to invoke the assembler?
       3.4.3 Where to find docs
       3.4.4 What if I can't compile Linux anymore with this new version?
   3.5 OTHER ASSEMBLERS
       3.5.1 Win32Forth assembler
       3.5.2 Terse
       3.5.3 HLA
       3.5.4 TALC
       3.5.5 Non-free and/or non-32bit x86 assemblers
4. METAPROGRAMMING/MACROPROCESSING
   4.1 What's integrated into the above
       4.1.1 GCC
       4.1.2 GAS
       4.1.3 GASP
       4.1.4 NASM
       4.1.5 AS86
       4.1.6 OTHER ASSEMBLERS
   4.2 External Filters
       4.2.1 CPP
       4.2.2 M4
       4.2.3 Macroprocessing with your own filter
       4.2.4 Metaprogramming
             4.2.4.1 Backends from compilers
             4.2.4.2 The New-Jersey Machine-Code Toolkit
             4.2.4.3 TUNES
5. CALLING CONVENTIONS
   5.1 Linux
       5.1.1 Linking to GCC
       5.1.2 ELF vs a.out problems
       5.1.3 Direct Linux syscalls
       5.1.4 Hardware I/O under Linux
       5.1.5 Accessing 16-bit drivers from Linux/i386
   5.2 DOS
   5.3 Windows and Co.
   5.4 Your own OS
6. QUICK START
   6.1 Tools you need
   6.2 Hello, world!
       6.2.1 NASM (hello.asm)
       6.2.2 GAS (hello.S)
   6.3 Producing object code
   6.4 Producing executable
7. RESOURCES
   7.1 Mailing list
   7.2 Frequently asked questions (with answers)
       7.2.1 How do I do graphics programming in Linux?
       7.2.2 How do I debug pure assembly code under Linux?
       7.2.3 Any other useful debugging tools?
       7.2.4 How do I access BIOS functions from Linux (BSD, BeOS, etc)?
______________________________________________________________________

1. INTRODUCTION

You can skip this section if you are familiar with HOWTOs, or just hate to read all this assembly-nonrelated crap.

1.1. Legal Blurb

Copyright © 1999-2000 Konstantin Boldyshev. Copyright © 1996-1999 Francois-Rene Rideau. This document may be distributed only subject to the terms and conditions set forth in the LDP License. It may be reproduced and distributed in whole or in part, in any medium physical or electronic, provided that this license notice is displayed in the reproduction. Commercial redistribution is permitted and encouraged.
All modified documents, including translations, anthologies, and partial documents, must meet the following requirements:

· The modified version must be labeled as such.
· The person making the modifications must be identified.
· Acknowledgement of the original author must be retained.
· The location of the original unmodified document must be identified.
· The original author's (or authors') name(s) may not be used to assert or imply endorsement of the resulting document without the original author's (or authors') permission.

The most recent official version of this document is available from the Linux Assembly and LDP sites. If you are reading a few-months-old copy, consider checking the URLs above for a new version.

1.2. Foreword

This document aims at answering the questions of those who program or want to program 32-bit x86 assembly using free software, particularly under the Linux operating system. At many places, Uniform Resource Locators (URLs) are given for some software or documentation repository. This document also points to other documents about non-free, non-x86, or non-32-bit assemblers, although this is not its primary goal. Also note that there are FAQs and docs about programming on your favorite platform (whatever it is), which you should consult for platform-specific issues not related directly to assembly programming.

Because the main interest of assembly programming is to build the guts of operating systems, interpreters, compilers, and games, where the C compiler fails to provide the needed expressiveness (performance is less and less often an issue), we are focusing on the development of such kinds of software.

If you don't know what free software is, please do read carefully the GNU General Public License, which is used in a lot of free software and is the model for most of their licenses. It generally comes in a file named COPYING (or COPYING.LIB). Literature from the FSF (Free Software Foundation) might help you, too.
Particularly, the interesting feature of free software is that it comes with sources that you can consult and correct, or sometimes even borrow from. Read your particular license carefully and do comply with it.

1.3. Contributions

This is an interactively evolving document: you are especially invited to ask questions, to answer questions, to correct given answers, to give pointers to new software, and to point the current maintainer to bugs or deficiencies in the pages. In one word, contribute!

To contribute, please contact the Assembly-HOWTO maintainer. At the time of this writing, it is Konstantin Boldyshev and no longer Francois-Rene Rideau. I (Fare) had been looking for some time for a serious hacker to replace me as maintainer of this document, and am pleased to announce Konstantin as my worthy successor.

1.4. Credits

I would like to thank the following persons, in order of appearance:

· Linus Torvalds for Linux
· Bruce Evans for bcc, from which as86 is extracted
· Simon Tatham and Julian Hall for NASM
· Greg Hankins and now Tim Bynum for maintaining HOWTOs
· Raymond Moon for his FAQ
· Eric Dumas for his translation of the mini-HOWTO into French (a sad thing for the original author, to be French and write in English)
· Paul Anderson and Rahim Azizarab for helping me, if not for taking over the HOWTO
· Marc Lehman for his insight on GCC invocation
· Abhijit Menon-Sen for helping me figure out the argument passing convention
· All the people who have contributed ideas, answers, remarks, and moral support

1.5. History

Each version includes a few fixes and minor corrections, which need not be repeatedly mentioned every time.
Version 0.5k 11 Jul 2000
  Few additions to FAQ.

Version 0.5j 14 Jun 2000
  Complete rearrangement of INTRODUCTION and RESOURCES; FAQ added to RESOURCES, misc cleanups and additions (and more to come).

Version 0.5i 04 May 2000
  Added HLA, TALC; rearrangements in RESOURCES, QUICK START, ASSEMBLERS; few new pointers.

Version 0.5h 09 Apr 2000
  Finally managed to state the LDP license on the document; new resources added, misc fixes.

Version 0.5g 26 Mar 2000
  New resources on different CPUs.

Version 0.5f 02 Mar 2000
  New resources, misc corrections.

Version 0.5e 10 Feb 2000
  URL updates, changes in GAS example.

Version 0.5d 01 Feb 2000
  RESOURCES (former POINTERS) section completely redone; various URL updates.

Version 0.5c 05 Dec 1999
  New pointers, updates, and some rearrangements. Rewrite of SGML source.

Version 0.5b 19 Sep 1999
  Discussion about libc or not libc continues. New web pointers and overall updates.

Version 0.5a 01 Aug 1999
  "QUICK START" section rearranged, added GAS example. Several new web pointers.

Version 0.5 25 July 1999
  GAS has 16-bit mode. New maintainer (at last): Konstantin Boldyshev. Discussion about libc or not libc. Added section "QUICK START" with examples of using assembly.

Version 0.4q 22 June 1999
  Process argument passing (argc, argv, environ) in assembly. This is yet another "last release by Fare before the new maintainer takes over". Nobody knows who might be the new maintainer.

Version 0.4p 6 June 1999
  Clean up and updates.

Version 0.4o 1 December 1998
  *

Version 0.4m 23 March 1998
  Corrections about gcc invocation.

Version 0.4l 16 November 1997
  Release for LSL 6th edition.

Version 0.4k 19 October 1997
  *

Version 0.4j 7 September 1997
  *

Version 0.4i 17 July 1997
  Info on 16-bit mode access from Linux.

Version 0.4h 19 Jun 1997
  Still more on "how not to use assembly"; updates on NASM, GAS.

Version 0.4g 30 Mar 1997
  *

Version 0.4f 20 Mar 1997
  *

Version 0.4e 13 Mar 1997
  Release for DrLinux.

Version 0.4d 28 Feb 1997
  Vapor announcement of a new Assembly-HOWTO maintainer.
Version 0.4c 9 Feb 1997
  Added section "DO YOU NEED ASSEMBLY?"

Version 0.4b 3 Feb 1997
  NASM moved: now is before AS86.

Version 0.4a 20 Jan 1997
  CREDITS section added.

Version 0.4 20 Jan 1997
  First release of the HOWTO as such.

Version 0.4pre1 13 Jan 1997
  Text mini-HOWTO transformed into a full linuxdoc-sgml HOWTO, to see what the SGML tools are like.

Version 0.3l 11 Jan 1997
  *

Version 0.3k 19 Dec 1996
  What? I had forgotten to point to terse???

Version 0.3j 24 Nov 1996
  Point to French translated version.

Version 0.3i 16 Nov 1996
  NASM is getting pretty slick.

Version 0.3h 6 Nov 1996
  More about cross-compiling -- see on sunsite: devel/msdos/

Version 0.3g 2 Nov 1996
  Created the History. Added pointers in cross-compiling section. Added section about I/O programming under Linux (particularly video).

Version 0.3f 17 Oct 1996
  *

Version 0.3c 15 Jun 1996
  *

Version 0.2 04 May 1996
  *

Version 0.1 23 Apr 1996
  Francois-Rene "Fare" Rideau creates and publishes the first mini-HOWTO, because "I'm sick of answering ever the same questions on comp.lang.asm.x86".

2. DO YOU NEED ASSEMBLY?

Well, I wouldn't want to interfere with what you're doing, but here is some advice from hard-earned experience.

2.1. Pros and Cons

2.1.1. The advantages of Assembly

Assembly can express very low-level things:

· you can access machine-dependent registers and I/O.

· you can control the exact behavior of code in critical sections that might otherwise involve deadlock between multiple software threads or hardware devices.

· you can break the conventions of your usual compiler, which might allow some optimizations (like temporarily breaking rules about memory allocation, threading, calling conventions, etc).

· you can build interfaces between code fragments using such incompatible conventions (e.g. produced by different compilers, or separated by a low-level interface).

· you can get access to unusual programming modes of your processor (e.g.
16-bit mode to interface startup, firmware, or legacy code on Intel PCs)

· you can produce reasonably fast code for tight loops to cope with a bad non-optimizing compiler (but then, there are free optimizing compilers available!)

· you can produce hand-optimized code perfectly tuned for your particular hardware setup, though not to anyone else's.

· you can write some code for your new language's optimizing compiler (that's something few will ever do, and even they, not often).

2.1.2. The disadvantages of Assembly

Assembly is a very low-level language (the lowest above hand-coding the binary instruction patterns). This means:

· it's long and tedious to write initially,

· it's quite bug-prone,

· your bugs can be very difficult to chase,

· it's very difficult to understand and modify, i.e. to maintain,

· the result is very non-portable to other architectures, existing or future,

· your code will be optimized only for a certain implementation of the same architecture: for instance, among Intel-compatible platforms, each CPU design and its variations (relative latency, throughput, and capacity of processing units, caches, RAM, bus, disks; presence of FPU, MMX, 3DNOW, SIMD extensions, etc) implies potentially completely different optimization techniques. CPU designs already include: Intel 386, 486, Pentium, PPro, Pentium II, Pentium III; Cyrix 5x86, 6x86; AMD K5, K6 (K6-2, K6-III), K7 (Athlon). New designs keep popping up, so don't expect either this listing or your code to be up-to-date.

· you spend more time on a few details, and can't focus on small and large algorithmic design, which is known to bring the largest part of the speedup. [e.g.
you might spend some time building very fast list/array manipulation primitives in assembly, when a hash table would have sped up your program much more; or, in another context, a binary tree; or some high-level structure distributed over a cluster of CPUs]

· a small change in algorithmic design might completely invalidate all your existing assembly code, so that either you're ready (and able) to rewrite it all, or you're tied to a particular algorithmic design;

· on code that ain't too far from what's in standard benchmarks, commercial optimizing compilers outperform hand-coded assembly (well, that's less true on the x86 architecture than on RISC architectures, and perhaps less true for widely available/free compilers; anyway, for typical C code, GCC is fairly good);

· and in any case, as moderator John Levine says on comp.compilers, "compilers make it a lot easier to use complex data structures, and compilers don't get bored halfway through and generate reliably pretty good code." They will also correctly propagate code transformations throughout the whole (huge) program when optimizing code across procedure and module boundaries.

2.1.3. Assessment

All in all, you might find that though using assembly is sometimes needed, and might even be useful in a few cases where it is not, you'll want to:

· minimize the use of assembly code,

· encapsulate this code in well-defined interfaces,

· have your assembly code automatically generated from patterns expressed in a higher-level language than assembly (e.g. GCC inline assembly macros),

· have automatic tools translate these programs into assembly code,

· have this code be optimized if possible,

· all of the above, i.e. write (an extension to) an optimizing compiler back-end.

Even in cases when assembly is needed (e.g. OS development), you'll find that not so much of it is, and that the above principles hold.
See the Linux kernel sources concerning this: as little assembly as needed, resulting in a fast, reliable, portable, maintainable OS. Even a successful game like DOOM was written almost entirely in C, with only a tiny part in assembly for speedup.

2.2. How to NOT use Assembly

2.2.1. General procedure to achieve efficient code

As Charles Fiterman says on comp.compilers about human vs computer-generated assembly code, "The human should always win, and here is why:

· First the human writes the whole thing in a high-level language.

· Second he profiles it to find the hot spots where it spends its time.

· Third he has the compiler produce assembly for those small sections of code.

· Fourth he hand-tunes them, looking for tiny improvements over the machine-generated code.

The human wins because he can use the machine."

2.2.2. Languages with optimizing compilers

Languages like ObjectiveCAML, SML, CommonLISP, Scheme, ADA, Pascal, C, and C++, among others, all have free optimizing compilers that will optimize the bulk of your programs, and often do better than hand-coded assembly even for tight loops, while allowing you to focus on higher-level details, and without forbidding you to grab a few percent of extra performance in the above-mentioned way, once you've reached a stable design. Of course, there are also commercial optimizing compilers for most of these languages, too!

Some languages have compilers that produce C code, which can be further optimized by a C compiler: LISP, Scheme, Perl, and many others. Speed is fairly good.

2.2.3. General procedure to speed your code up

As for speeding code up, you should do it only for parts of a program that a profiling tool has consistently identified as being a performance bottleneck.
Hence, if you identify some code portion as being too slow, you should:

· first try to use a better algorithm;

· then try to compile it rather than interpret it;

· then try to enable and tweak optimization from your compiler;

· then give the compiler hints about how to optimize (typing information in LISP; register usage with GCC; lots of options in most compilers, etc);

· then possibly fall back to assembly programming.

Finally, before you end up writing assembly, you should inspect the generated code, to check that the problem really is with bad code generation, as this might really not be the case: compiler-generated code might be better than what you'd have written, particularly on modern multi-pipelined architectures! Slow parts of a program might be intrinsically so. The biggest problems on modern architectures with fast processors are due to delays from memory access, cache misses, TLB misses, and page faults; register optimization becomes useless, and you'll more profitably rethink your data structures and threading to achieve better locality in memory access. Perhaps a completely different approach to the problem might help, then.

2.2.4. Inspecting compiler-generated code

There are many reasons to inspect compiler-generated assembly code. Here is what you'll do with such code:

· check whether generated code can be obviously enhanced with hand-coded assembly (or by tweaking compiler switches);

· when that's the case, start from generated code and modify it instead of starting from scratch;

· more generally, use generated code as stubs to modify, which at least gets right the way your assembly routines interface to the external world;

· track down bugs in your compiler (hopefully rare).

The standard way to have assembly code be generated is to invoke your compiler with the -S flag. This works with most Unix compilers, including the GNU C Compiler (GCC), but YMMV. As for GCC, it will produce more understandable assembly code with the -fverbose-asm command-line option.
Of course, if you want to get good assembly code, don't forget your usual optimization options and hints!

2.3. Linux and assembly

In the general case, you don't need to use assembly language in Linux programming. Unlike DOS, you do not have to write Linux drivers in assembly (well, you can do it if you really want). And with modern optimizing compilers, if you care about speed optimization for different CPUs, it's much simpler to write in C. However, if you're reading this, you might have some reason to use assembly instead of C/C++. You may need to use assembly, or you may want to use assembly. In short, the main practical reasons why you may need to get into Linux assembly are small code size and libc independence. The non-practical (and most frequent) reason is being just an old crazy hacker, who has a twenty-year-old habit of doing everything in assembly language.

Also, if you're porting Linux to some embedded hardware, you can be quite short on space for the whole system: you need to fit the kernel, libc, and all that stuff of (file|find|text|sh|etc.) utils into several hundred kilobytes, and every kilobyte counts. So one of the ways you've got is to rewrite some (or all) parts of the system in assembly, and this will really save you a lot of space. For instance, a simple httpd written in assembly can take less than 600 bytes; you can fit a webserver, consisting of kernel and httpd, in 400 KB or less... Think about it.

3. ASSEMBLERS

3.1. GCC Inline Assembly

The well-known GNU C/C++ Compiler (GCC), an optimizing 32-bit compiler at the heart of the GNU project, supports the x86 architecture quite well, and includes the ability to insert assembly code in C programs, in such a way that register allocation can be either specified or left to GCC. GCC works on most available platforms, notably Linux, *BSD, VSTa, OS/2, *DOS, Win*, etc.

3.1.1. Where to find GCC

The original GCC site is the GNU FTP site, together with all released application software from the GNU project.
Linux-configured and precompiled versions can be found at the usual Linux FTP sites. There are a lot of FTP mirrors of both sites, everywhere around the world, as well as CD-ROM copies. GCC development split into two branches some time ago (GCC 2.8 and EGCS), but they have merged back, and the current GCC webpage is . Sources adapted to your favorite OS and precompiled binaries should be found at your usual FTP sites.

The most popular DOS port of GCC is named DJGPP, and can be found in directories of that name on FTP sites. See .

There are two Win32 GCC ports: cygwin and mingw.

There is also a port of GCC to OS/2 named EMX, which also works under DOS, and includes lots of unix-emulation library routines. See around the following site: .

3.1.2. Where to find docs for GCC Inline Asm

The documentation of GCC includes documentation files in TeXinfo format. You can compile them with TeX and print the result, convert them to .info and browse them with emacs, or convert them (with the right tools) to .html or nearly whatever format you like, or just read them as is. The .info files are generally found on any good installation of GCC.

The right section to look for is C Extensions::Extended Asm::. The section Invoking GCC::Submodel Options::i386 Options:: might help too. Particularly, it gives the i386-specific constraint names for registers: abcdSDB correspond to %eax, %ebx, %ecx, %edx, %esi, %edi and %ebp respectively (no letter for %esp).

The DJGPP Games resource (not only for game hackers) had a page specifically about assembly, but it's down. Its data have nonetheless been recovered on the DJGPP site, which contains a mine of other useful information: , and in the DJGPP Quick ASM Programming Guide .

GCC depends on GAS for assembling, and follows its syntax (see below); do mind that inline asm needs percent characters to be quoted (doubled) so that they are passed through to GAS. See the section about GAS below.
Find lots of useful examples in the linux/include/asm-i386/ subdirectory of the sources for the Linux kernel.

3.1.3. Invoking GCC to build proper inline assembly code

Because assembly routines from the kernel headers (and most likely your own headers, if you try making your assembly programming as clean as it is in the Linux kernel) are embedded in extern inline functions, GCC must be invoked with the -O flag (or -O2, -O3, etc) for these routines to be available. If not, your code may compile, but not link properly, since it will be looking for non-inlined extern functions in the libraries against which your program is being linked! Another way is to link against libraries that include fallback versions of the routines.

Inline assembly can be disabled with -fno-asm, which will have the compiler die when using extended inline asm syntax, or else generate calls to an external function named asm() that the linker can't resolve. To counter such a flag, -fasm restores treatment of the asm keyword.

More generally, good compile flags for GCC on the x86 platform are

______________________________________________________________________
gcc -O2 -fomit-frame-pointer -W -Wall
______________________________________________________________________

-O2 is the right optimization level in most cases. Optimizing beyond it takes longer, and yields code that is a lot larger, but only a bit faster; such overoptimization might be useful for tight loops only (if any), which you may be doing in assembly anyway. In cases when you need really strong compiler optimization for a few files, do consider using up to -O6.

-fomit-frame-pointer allows generated code to skip the stupid frame pointer maintenance, which makes code smaller and faster, and frees a register for further optimizations. It precludes the easy use of debugging tools (gdb), but when you use these, you just don't care about size and speed anymore anyway.

-W -Wall enables all warnings and helps you catch obvious stupid errors.
You can add some CPU-specific flag such as -m486 so that GCC will produce code that is better adapted to your precise computer. Note that modern GCC has -mpentium and such flags (and PGCC has even more), whereas GCC 2.7.x and older versions do not. A good choice of CPU-specific flags can be found in the Linux kernel. Check the TeXinfo documentation of your current GCC installation for more.

-m386 will help optimize for size, hence also for speed on computers whose memory is tight and/or loaded, since big programs cause swapping, which more than counters any "optimization" intended by the larger code. In such settings, it might be useful to stop using C, and instead use a language that favors code factorization, such as a functional language and/or FORTH, with a bytecode- or wordcode-based implementation.

Note that you can vary code generation flags from file to file, so performance-critical files will use maximum optimization, whereas other files will be optimized for size.

To optimize even more, the option -mregparm=2 and/or the corresponding function attribute might help, but might pose lots of problems when linking to foreign code, including libc. There are ways to correctly declare foreign functions so that the right call sequences are generated, or you might want to recompile the foreign libraries to use the same register-based calling convention...

Note that you can make these flags the default by editing the file /usr/lib/gcc-lib/i486-linux/2.7.2.3/specs or wherever that is on your system (better not add -W -Wall there, though). The exact location of the GCC specs files on your system can be found by asking gcc -v.

3.2. GAS

GAS is the GNU Assembler, which GCC relies upon.

3.2.1. Where to find it

Find it at the same place where you found GCC, in a package named binutils. The latest version is available from HJLu at .

3.2.2.
What is this AT&T syntax

Because GAS was invented to support a 32-bit unix compiler, it uses standard AT&T syntax, which closely resembles the syntax of standard m68k assemblers, and is standard in the UNIX world. This syntax is neither worse nor better than the Intel syntax. It's just different. When you get used to it, you find it much more regular than the Intel syntax, though a bit boring.

Here are the major caveats about GAS syntax:

· Register names are prefixed with %, so that registers are %eax, %dl and so on, instead of just eax, dl, etc. This makes it possible to include external C symbols directly in assembly source, without any risk of confusion, or any need for ugly underscore prefixes.

· The order of operands is source(s) first, and destination last, as opposed to the Intel convention of destination first and sources last. Hence, what in Intel syntax is mov ax,dx (move contents of register dx into register ax) will be in GAS syntax mov %dx, %ax.

· The operand length is specified as a suffix to the instruction name. The suffix is b for (8-bit) byte, w for (16-bit) word, and l for (32-bit) long. For instance, the correct syntax for the above instruction would have been movw %dx,%ax. However, gas does not require strict AT&T syntax, so the suffix is optional when the length can be guessed from register operands, and otherwise defaults to 32-bit (with a warning).

· Immediate operands are marked with a $ prefix, as in addl $5,%eax (add immediate long value 5 to register %eax).

· No prefix on an operand indicates that it is a memory address; hence movl $foo,%eax puts the address of variable foo in register %eax, but movl foo,%eax puts the contents of variable foo in register %eax.

· Indexing or indirection is done by enclosing the index register or indirection memory cell address in parentheses, as in testb $0x80,17(%ebp) (test the high bit of the byte value at offset 17 from the cell pointed to by %ebp).
A program exists to help you convert programs from TASM syntax to AT&T syntax. See . (Since the original x2ftp site is closing (no more?), use a mirror site .) There also exists a program for the reverse conversion: .

GAS has comprehensive documentation in TeXinfo format, which comes at least with the source distribution. Browse the extracted .info pages with Emacs or whatever. There used to be a file named gas.doc or as.doc around the GAS source package, but it was merged into the TeXinfo docs. Of course, in case of doubt, the ultimate documentation is the sources themselves! A section that will particularly interest you is Machine Dependencies::i386-Dependent::.

Again, the sources for Linux (the OS kernel) serve as excellent examples; see under linux/arch/i386/ the following files: kernel/*.S, boot/compressed/*.S, mathemu/*.S.

If you are writing some kind of language, a thread package, etc., you might as well see how other languages (OCaml, Gforth, etc.), or thread packages (QuickThreads, MIT pthreads, LinuxThreads, etc.), or whatever, do it.

Finally, just compiling a C program to assembly might show you the syntax for the kind of instructions you want. See section ``Do you need Assembly?'' above.

3.2.3. 16-bit mode

The current stable release of binutils (2.9.1.0.25) now fully supports 16-bit mode (registers and addressing) on i386 PCs. Still with its peculiar AT&T syntax, of course. Use .code16 and .code32 to switch between assembly modes.

Also, a neat trick used by some (including the oskit authors) is to have GCC produce code for 16-bit real mode, using an inline assembly statement asm(".code16\n"). GCC will still emit only 32-bit addressing modes, but GAS will insert proper 32-bit prefixes for them.

3.2.4. GASP

GASP is the GAS Preprocessor. It adds macros and some nice syntax to GAS. GASP comes together with GAS in the GNU binutils archive. It works as a filter, much like cpp and the like.
I have no idea about the details, but it comes with its own TeXinfo documentation, so just browse it (in .info), print it, grok it. GAS with GASP looks like a regular macro-assembler to me.

3.3. NASM

The Netwide Assembler project provides a cool i386 assembler, written in C, that should be modular enough to eventually support all known syntaxes and object formats.

3.3.1. Where to find NASM

Binary releases are on your usual metalab mirror in devel/lang/asm/. They should also be available as .rpm or .deb in your usual RedHat/Debian distributions' contrib. At the time of writing, the current version of NASM is 0.98.

Note: there's also an extended NASM version available, known as 0.98e. It introduces several serious bugfixes and improvements, so you may want to use it instead of the "official" version.

3.3.2. What it does

The syntax is Intel-style. Fairly good macroprocessing support is integrated. Supported object file formats are bin, aout, coff, elf, as86, (DOS) obj, win32, and (their own format) rdf. NASM can be used as a backend for the free LCC compiler (support files included).

Unless you're using BCC as a 16-bit compiler (which is outside the scope of this 32-bit HOWTO), you should definitely use NASM instead of, say, AS86 or MASM, because it is actively supported online, and runs on all platforms. Note: NASM also comes with a disassembler, NDISASM. Its hand-written parser makes it much faster than GAS, though of course, it doesn't support three bazillion different architectures. If you like Intel-style syntax, as opposed to GAS syntax, then it should be the assembler of choice...

Note: There are ``converters between GAS AT&T and Intel assembler syntax'', which perform conversion in both directions.

3.4. AS86

AS86 is an 80x86 assembler, both 16-bit and 32-bit, part of Bruce Evans' C Compiler (BCC). It has mostly Intel syntax, though it differs slightly with regard to addressing modes.

3.4.1.
Where to get AS86

A completely outdated version of AS86 is distributed by HJLu just to compile the Linux kernel, in a package named bin86 (current version 0.4), available in any Linux GCC repository. But I advise no one to use it for anything but compiling Linux. This version supports only a hacked minix object file format, which is not supported by the GNU binutils or anything else, and it has a few bugs in 32-bit mode, so you really had better keep it only for compiling Linux.

The most recent versions by Bruce Evans (bde@zeta.org.au) are published together with the FreeBSD distribution. Well, they were: I could not find the sources from distribution 2.1 on :( Hence, I put the sources at my place:

The Linux/8086 (aka ELKS) project is somehow maintaining bcc (though I don't think they included the 32-bit patches). See around (or ) and . I haven't followed these developments, and would appreciate a reader contributing on this topic.

Among other things, these more recent versions, unlike HJLu's, support the Linux GNU a.out format, so you can link your code to Linux programs, and/or use the usual tools from the GNU binutils package to manipulate your data. This version can co-exist without any harm with the previous one (see the according question below).

BCC from 12 March 1995 and earlier versions have a misfeature that makes all segment pushing/popping 16-bit, which is quite annoying when programming in 32-bit mode. I wrote a patch at the time when the TUNES Project used as86: . Bruce Evans accepted this patch, but since, as far as I know, he hasn't published a new release of bcc, the ones to ask about integrating it (if not done yet) are the ELKS developers.

3.4.2. How to invoke the assembler?
Here's the GNU Makefile entry for using bcc to transform .s asm into both a GNU a.out .o object and a .l listing:

______________________________________________________________________
%.o %.l: %.s
        bcc -3 -G -c -A-d -A-l -A$*.l -o $*.o $<
______________________________________________________________________

Remove the %.l, -A-l, and -A$*.l if you don't want any listing. If you want something other than GNU a.out, you can see the docs of bcc about the other supported formats, and/or use the objcopy utility from the GNU binutils package.

3.4.3. Where to find docs

The docs are those included in the bcc package. I salvaged the man pages that used to be available from the FreeBSD site at . Maybe the ELKS developers know better. When in doubt, the sources themselves are often good docs: they're not very well commented, but the programming style is straightforward. You might try to see how as86 is used in ELKS or Tunes 0.0.0.25...

3.4.4. What if I can't compile Linux anymore with this new version?

Linus is buried alive in mail, and since HJLu (the official bin86 maintainer) chose to write hacks around an obsolete version of as86 instead of building clean code around the latest version, I don't think my patch for compiling Linux with a modern as86 has any chance of being accepted if resubmitted. Now, this shouldn't matter: just keep your as86 from the bin86 package in /usr/bin/, and let bcc install the good as86 as /usr/local/libexec/i386/bcc/as, where it should be. You never need to explicitly call this "good" as86, because bcc does everything right, including conversion to Linux a.out, when invoked with the right options; so assemble files exclusively with bcc as a frontend, not directly with as86.

Since GAS now supports 16-bit code, and since H. Peter Anvin, well-known Linux hacker, works on NASM, maybe Linux will get rid of AS86 anyway? Who knows!

3.5.
OTHER ASSEMBLERS

These are other non-regular options, in case the previous ones didn't satisfy you (why?). I don't recommend them in the usual (?) case, but they could be quite useful if the assembler must be integrated in the software you're designing (i.e. an OS or development environment).

3.5.1. Win32Forth assembler

Win32Forth is a free 32-bit ANS FORTH system that successfully runs under Win32s, Win95, and Win/NT. It includes a free 32-bit assembler (with either prefix or postfix syntax) integrated into the reflective FORTH language. Macro processing is done with the full power of the reflective language FORTH; however, the only supported input and output context is Win32Forth itself (no dumping of .obj files, but you could add that feature yourself, of course). Find it at .

3.5.2. Terse

Terse is a programming tool that provides THE most compact assembler syntax for the x86 family! However, it is evil proprietary software. It is said that there was a project for a free clone somewhere, which was abandoned after worthless pretenses that the syntax would be owned by the original author. Thus, if you're looking for a nifty programming project related to assembly hacking, I invite you to develop a terse-syntax frontend to NASM, if you like that syntax.

As an interesting historic remark, on comp.compilers, 1999/07/11 19:36:51, the moderator wrote:

"There's no reason that assemblers have to have awful syntax. About 30 years ago I used Niklaus Wirth's PL360, which was basically a S/360 assembler with Algol syntax and a little syntactic sugar like while loops that turned into the obvious branches. It really was an assembler, e.g., you had to write out your expressions with explicit assignments of values to registers, but it was nice. Wirth used it to write Algol W, a small fast Algol subset, which was a predecessor to Pascal. As is so often the case, Algol W was a significant improvement over many of its successors. -John"

3.5.3. HLA

HLA is a High Level Assembly language.
It uses a high-level-language-like syntax (similar to Pascal, C/C++, and other HLLs) for variable declarations, procedure declarations, and procedure calls. It uses a modified assembly language syntax for the standard machine instructions. It also provides several high-level-language-style control structures (if, while, repeat..until, etc.) that help you write much more readable code.

HLA is free, but runs only under Win32. You need MASM and a 32-bit version of MS-link, because HLA produces MASM code and uses MASM for final assembling and linking. However, it comes with an m2t (MASM to TASM) post-processor program that converts the HLA MASM output to a form that will compile under TASM. Unfortunately, NASM is not supported.

3.5.4. TALC

TALC is another free MASM/Win32-based compiler (however, it supports ELF output, does it?).

TAL stands for Typed Assembly Language. It extends traditional untyped assembly languages with typing annotations, memory management primitives, and a sound set of typing rules, to guarantee the memory safety, control flow safety, and type safety of TAL programs. Moreover, the typing constructs are expressive enough to encode most source language programming features, including records and structures, arrays, higher-order and polymorphic functions, exceptions, abstract data types, subtyping, and modules. Just as importantly, TAL is flexible enough to admit many low-level compiler optimizations. Consequently, TAL is an ideal target platform for type-directed compilers that want to produce verifiably safe code for use in secure mobile code applications or extensible operating system kernels.

3.5.5. Non-free and/or Non-32bit x86 assemblers

You may find more about them, together with the basics of x86 assembly programming, in ``Raymond Moon's x86 assembly FAQ''.

Note that all DOS-based assemblers should work inside the Linux DOS Emulator, as well as other similar emulators, so that if you already own one, you can still use it inside a real OS.
Recent DOS-based assemblers also support COFF and/or other object file formats that are supported by the GNU BFD library, so that you can use them together with your free 32-bit tools, perhaps using GNU objcopy (part of the binutils) as a conversion filter.

4. METAPROGRAMMING/MACROPROCESSING

Assembly programming is a bore, except for critical parts of programs. You should use the appropriate tool for the right task, so don't choose assembly when it doesn't fit; C, OCaml, perl, Scheme, might be a better choice for most of your programming. However, there are cases when these tools do not give fine enough control over the machine, and assembly is useful or needed. In those cases, you'll appreciate a system of macroprocessing and metaprogramming that allows recurring patterns to be factored, each into one indefinitely reusable definition, which allows safer programming, automatic propagation of pattern modifications, etc. Plain assembler often is not enough, even when one is doing only small routines to link with C.

4.1. What's integrated into the above

Yes, I know this section does not contain much useful up-to-date information. Feel free to contribute what you discover the hard way...

4.1.1. GCC

GCC allows (and requires) you to specify register constraints in your inline assembly code, so the optimizer always knows about them; thus, inline assembly code is really made of patterns, not necessarily exact code. Thus, you can put your assembly into CPP macros and inline C functions, so anyone can use it as any C function/macro. Inline functions resemble macros very much, but are sometimes cleaner to use. Beware that in all those cases, code will be duplicated, so only local labels (of 1: style) should be defined in that asm code. However, a macro would allow the name for a non-local defined label to be passed as a parameter (or else, you should use additional meta-programming methods).
Also, note that propagating inline asm code will spread any potential bugs in it; so watch out doubly for register constraints in such inline asm code. Lastly, the C language itself may be considered a good abstraction of assembly programming, which relieves you of most of the trouble of assembling.

4.1.2. GAS

GAS has some macro capability included, as detailed in the texinfo docs. Moreover, while GCC recognizes .s files as raw assembly to send to GAS, it also recognizes .S files as files to pipe through CPP before feeding them to GAS. Again and again, see the Linux sources for examples.

4.1.3. GASP

It adds all the usual macroassembly tricks to GAS. See its texinfo docs.

4.1.4. NASM

NASM has comprehensive macro support, too. See the according docs. If you have some bright idea, you might wanna contact the authors, as they are actively developing it. Meanwhile, see about external filters below.

4.1.5. AS86

It has some simple macro support, but I couldn't find docs. Now the sources are very straightforward, so if you're interested, you should understand them easily. If you need more than the basics, you should use an external filter (see below).

4.1.6. OTHER ASSEMBLERS

· Win32FORTH: CODE and END-CODE are normal FORTH words that do not switch from interpretation mode to compilation mode, so you have access to the full power of FORTH while assembling.

· TUNES: it doesn't work yet, but the Scheme language is a real high-level language that allows arbitrary meta-programming.

4.2. External Filters

Whatever the macro support of your assembler, and whatever language you use (even C!), if the language is not expressive enough for you, you can have files passed through an external filter with a Makefile rule like this:

______________________________________________________________________

%.s: %.S other_dependencies
        $(FILTER) $(FILTER_OPTIONS) < $< > $@
______________________________________________________________________

4.2.1.
CPP

CPP is truly not very expressive, but it's enough for easy things, it's standard, and it's called transparently by GCC. As an example of its limitations: you can't declare objects so that destructors are automatically called at the end of the declaring block; you don't have diversions or scoping, etc. CPP comes with any C compiler. However, considering how mediocre it is, stay away from it if by chance you can make it without C.

4.2.2. M4

M4 gives you the full power of macroprocessing, with a Turing-equivalent language, recursion, regular expressions, etc. You can do with it everything that CPP cannot. See macro4th (this4th) or the Tunes 0.0.0.25 sources as examples of advanced macroprogramming using m4. However, its dysfunctional quoting and unquoting semantics force you to use an explicit continuation-passing tail-recursive macro style if you want to do advanced macro programming (which is reminiscent of TeX -- BTW, has anyone tried to use TeX as a macroprocessor for anything other than typesetting?). This is NOT worse than CPP, which does not allow quoting and recursion anyway. The right version of m4 to get is GNU m4 1.4 (or later, if it exists), which has the most features and the fewest bugs or limitations of all. m4 is designed to be slow for anything but the simplest uses, which might still be ok for most assembly programming (you're not writing million-line assembly programs, are you?).

4.2.3. Macroprocessing with your own filter

You can write your own simple macro-expansion filter with the usual tools: perl, awk, sed, etc. That's quick to do, and you control everything. But of course, any power in macroprocessing must be earned the hard way.

4.2.4. Metaprogramming

Instead of using an external filter that expands macros, one way to do things is to write programs that write part or all of other programs.
For instance, you could use a program outputting source code

· to generate sine/cosine/whatever lookup tables,

· to extract a source-form representation of a binary file,

· to compile your bitmaps into fast display routines,

· to extract documentation, initialization/finalization code, description tables, as well as normal code from the same source files,

· to have customized assembly code, generated from a perl/shell/scheme script that does arbitrary processing,

· to propagate data defined at one point only into several cross-referencing tables and code chunks,

· etc.

Think about it!

4.2.4.1. Backends from compilers

Compilers like GCC, SML/NJ, Objective CAML, MIT-Scheme, CMUCL, etc, do have their own generic assembler backend, which you might choose to use, if you intend to generate code semi-automatically from the according languages, or from a language you hack: rather than write great assembly code, you may instead modify a compiler so that it dumps great assembly code!

4.2.4.2. The New-Jersey Machine-Code Toolkit

There is a project, using the programming language Icon (with an experimental ML version), to build a basis for producing assembly-manipulating code. See around

4.2.4.3. TUNES

The TUNES Project for a Free Reflective Computing System is developing its own assembler as an extension to the Scheme language, as part of its development process. It doesn't run at all yet, though help is welcome. The assembler manipulates abstract syntax trees, so it could equally serve as the basis for an assembly syntax translator, a disassembler, a common assembler/compiler back-end, etc. Also, the full power of a real language, Scheme, makes it unchallenged for macroprocessing/metaprogramming.

5. CALLING CONVENTIONS

5.1. Linux

5.1.1. Linking to GCC

This is the preferred way if you are developing a mixed C/asm project. Check the GCC docs and the examples from Linux kernel .S files that go through gas (not those that go through as86).
32-bit arguments are pushed down the stack in reverse syntactic order (hence accessed/popped in the right order), above the 32-bit near return address. %ebp, %esi, %edi, %ebx are callee-saved; other registers are caller-saved; %eax is to hold the result, or %edx:%eax for 64-bit results. FP stack: I'm not sure, but I think the result is in st(0), with the whole stack caller-saved.

Note that GCC has options to modify the calling conventions by reserving registers, having arguments in registers, not assuming the FPU, etc. Check the i386 .info pages. Beware that you must then declare the cdecl or regparm(0) attribute for a function that will follow standard GCC calling conventions. See in the GCC info pages the section: C Extensions::Extended Asm::. See also how Linux defines its asmlinkage macro...

5.1.2. ELF vs a.out problems

Some C compilers prepend an underscore before every symbol, while others do not. Particularly, Linux a.out GCC does such prepending, while Linux ELF GCC does not. If you need to cope with both behaviors at once, see how existing packages do it. For instance, get an old Linux source tree, the Elk, qthreads, or OCaml... You can also override the implicit C->asm renaming by inserting statements like

______________________________________________________________________

void foo asm("bar") (void);
______________________________________________________________________

to be sure that the C function foo will really be called bar in assembly.

Note that the utility objcopy, from the binutils package, should allow you to transform your a.out objects into ELF objects, and perhaps the contrary too, in some cases. More generally, it will do lots of file format conversions.

5.1.3. Direct Linux syscalls

Often you will be told that using libc is the only way, and that direct system calls are bad. This is true. To some extent. So, you must know that libc is not sacred, and in most cases it only does some checks, then calls the kernel, and then sets errno.
You can easily do this in your program as well (if you need to), and your program will be a dozen times smaller, and this will also result in improved performance, just because you're not using shared libraries (static binaries are faster). Using or not using libc in assembly programming is more a question of taste/belief than something practical. Remember, Linux aims to be POSIX compliant, and so does libc. This means that the syntax of almost all libc "system calls" exactly matches the syntax of the real kernel system calls (and vice versa). Besides, modern libc becomes slower and slower, and eats more and more memory, and so cases of using direct system calls become quite usual. But.. the main drawback of throwing libc away is that possibly you will need to implement several libc-specific functions (those that are not just syscall wrappers) on your own (printf and Co.).. and you are ready for that, aren't you? :)

Here is a summary of direct system call pros and cons.

Pros:

· smallest possible size; squeezing the last byte out of the system.

· highest possible speed; squeezing cycles out of your favorite benchmark.

· full control: you can adapt your program/library to your specific language or memory requirements or whatever.

· no pollution by libc cruft.

· no pollution by C calling conventions (if you're developing your own language or environment).

· static binaries make you independent from libc upgrades or crashes, or from a dangling #! path to an interpreter (and are faster).

· just for the fun of it (don't you get a kick out of assembly programming?)

Cons:

· If any other program on your computer uses the libc, then duplicating the libc code will actually waste memory, not save it.

· Services redundantly implemented in many static binaries are a waste of memory. But you can make your libc replacement a shared library.

· Size is much better saved by having some kind of bytecode, wordcode, or structure interpreter than by writing everything in assembly.
(The interpreter itself could be written either in C or assembly.) The best way to keep multiple binaries small is to not have multiple binaries, but instead to have an interpreter process files with a #! prefix. This is how OCaml works when used in wordcode mode (as opposed to optimized native code mode), and it is compatible with using the libc. This is also how Tom Christiansen's Perl PowerTools reimplementation of unix utilities works. Finally, one last way to keep things small, that doesn't depend on an external file with a hardcoded path, be it library or interpreter, is to have only one binary, and have multiply-named hard or soft links to it: the same binary will provide everything you need in an optimal space, with no redundancy of subroutines or useless binary headers; it will dispatch its specific behavior according to its argv[0]; in case it isn't called with a recognized name, it might default to a shell, and thus possibly also be usable as an interpreter!

· You cannot benefit from the many functionalities that libc provides besides mere Linux syscalls: that is, the functionality described in section 3 of the manual pages, as opposed to section 2, such as malloc, threads, locale, password, high-level network management, etc.

· Consequently, you might have to reimplement large parts of libc, from printf to malloc and gethostbyname. It's redundant with the libc effort, and can be quite boring sometimes. Note that some people have already reimplemented "light" replacements for parts of the libc -- check them out! (Redhat's minilibc, Rick Hohensee's libsys, Felix von Leitner's dietlibc, Christian Fowelin's ``libASM''; the ``asmutils'' project is working on a pure assembly libc.)

· Static libraries prevent you from benefitting from libc upgrades as well as from libc add-ons such as the zlibc package, that does on-the-fly transparent decompression of gzip-compressed files.
· The few instructions added by the libc are a ridiculously small speed overhead compared to the cost of a system call. If speed is a concern, your main problem is in your usage of system calls, not in their wrapper's implementation.

· Using the standard assembly API for system calls is much slower than using the libc API when running in micro-kernel versions of Linux such as L4Linux, which have their own faster calling convention, and pay a high convention-translation overhead when using the standard one (L4Linux comes with libc recompiled with their syscall API; of course, you could recompile your code with their API, too).

· See the previous discussion for the general speed optimization issue.

· If syscalls are too slow for you, you might want to hack the kernel sources (in C) instead of staying in userland.

If you've pondered the above pros and cons, and still want to use direct syscalls (as documented in section 2 of the manual pages), then here is some advice.

· You can easily define your system calling functions in a portable way in C (as opposed to unportably, using assembly), by including , and using the provided macros.

· Since you're trying to replace it, go get the sources for the libc, and grok them. (And if you think you can do better, then send feedback to the authors!)

· As an example of pure assembly code that does everything you want, examine ``Linux Assembly resources''.

Basically, you issue an int 0x80, with the __NR_syscallname number (from asm/unistd.h) in eax, and the parameters (up to five) in ebx, ecx, edx, esi, edi respectively. The result is returned in eax, with a negative result being an error, whose opposite is what libc would put in errno. The user stack is not touched, so you needn't have a valid one when doing a syscall.
As for the invocation arguments passed to a process upon startup, the general principle is that the stack originally contains the number of arguments argc, then the list of pointers that constitute *argv, then a null-terminated sequence of null-terminated variable=value strings for the environment. For more details, do examine ``Linux assembly resources'', read the sources of the C startup code from your libc (crt0.S or crt1.S), or those from the Linux kernel (exec.c and binfmt_*.c in linux/fs/).

5.1.4. Hardware I/O under Linux

If you want to do direct I/O under Linux, either it's something very simple that doesn't need OS arbitration, and you should see the IO-Port-Programming mini-HOWTO; or it needs a kernel device driver, and you should try to learn more about kernel hacking, device driver development, kernel modules, etc, for which there are other excellent HOWTOs and documents from the LDP. Particularly, if what you want is graphics programming, then do join one of the GGI or XFree86 projects.

Some people have even done better, writing small and robust XFree86 drivers in an interpreted domain-specific language, GAL, and achieving the efficiency of hand-written C drivers through partial evaluation (drivers not only not in asm, but not even in C!). The problem is that the partial evaluator they used to achieve efficiency is not free software. Any taker for a replacement?

Anyway, in all these cases, you'll be better off using GCC inline assembly with the macros from linux/asm/*.h than writing full assembly source files.

5.1.5. Accessing 16-bit drivers from Linux/i386

Such a thing is theoretically possible (proof: see how DOSEMU can selectively grant hardware port access to programs), and I've heard rumors that someone somewhere actually did it (in the PCI driver? Some VESA access stuff? ISA PnP? dunno). If you have more precise information on that, you'll be most welcome.
Anyway, good places to look for more information are the Linux kernel sources, the DOSEMU sources (and other programs in the DOSEMU repository), and the sources of various low-level programs under Linux... (perhaps GGI, if it supports VESA).

Basically, you must either use 16-bit protected mode or vm86 mode. The former is simpler to set up, but only works with well-behaved code that won't do any kind of segment arithmetic or absolute segment addressing (particularly addressing segment 0), unless by chance it happens that all the segments used can be set up in advance in the LDT. The latter allows for more "compatibility" with vanilla 16-bit environments, but requires more complicated handling.

In both cases, before you can jump to 16-bit code, you must

· mmap any absolute address used in the 16-bit code (such as ROM, video buffers, DMA targets, and memory-mapped I/O) from /dev/mem to your process' address space,

· set up the LDT and/or the vm86 mode monitor,

· grab proper I/O permissions from the kernel (see the above section).

Again, carefully read the source for the stuff contributed to the DOSEMU project, particularly those mini-emulators for running ELKS and/or simple .COM programs under Linux/i386.

5.2. DOS

Most DOS extenders come with some interface to DOS services. Read their docs about that, but often, they just simulate int 0x21 and such, so you do "as if" you were in real mode (I doubt they have more than stubs and extend things to work with 32-bit operands; most likely they will just reflect the interrupt into the real-mode or vm86 handler). Docs about DPMI (and much more) can be found on (again, the original x2ftp site is closing (no more?), so use a mirror site ). DJGPP comes with its own (limited) glibc derivative/subset/replacement, too. It is possible to cross-compile from Linux to DOS; see the devel/msdos/ directory of your local FTP mirror for metalab.unc.edu Also see the MOSS dos-extender from the Flux project from the University of Utah.
Other documents and FAQs are more DOS-centered. We do not recommend DOS development.

5.3. Windows and Co.

This HOWTO is not about Windows programming; you can find lots of documents about it everywhere. The thing you should know is that Cygnus Solutions developed the cygwin32.dll library, for GNU programs to run on the Win32 platform; thus, you can use GCC, GAS, all the GNU tools, and many other Unix applications.

5.4. Your own OS

Control is what attracts many OS developers to assembly; it is often what leads to, or stems from, assembly hacking. Note that any system that allows self-development could be qualified as an "OS", though it can run "on top" of an underlying system (much like Linux over Mach or OpenGenera over Unix). Hence, for easier debugging purposes, you might like to first develop your "OS" as a process running on top of Linux (despite the slowness), then use the Flux OS kit (which grants the use of Linux and BSD drivers in your own OS) to make it standalone. When your OS is stable, it is time to write your own hardware drivers, if you really love that.

This HOWTO will not cover topics such as boot loader code and getting into 32-bit mode, handling interrupts, the basics of Intel protected mode or V86/R86 braindeadness, or defining your object format and calling conventions. The main place to find reliable information about all that is the source code of existing OSes and bootloaders. Lots of pointers are on the following webpage:

6. QUICK START

Finally, if you still want to try this crazy idea and write something in assembly (if you've reached this section, you're a real assembly fan), I'll herein provide what you need to get started. As you've read before, you can write for Linux in different ways; I'll show an example using pure system calls. This means that we will not use libc at all; the only thing required for our program to run is the kernel.
Our code will not be linked to any library and will not use the ELF interpreter; it will communicate directly with the kernel. I will show the same sample program in two assemblers, nasm and gas, thus showing both Intel and AT&T syntax. You may also want to read the Introduction to UNIX assembly programming tutorial; it contains sample code for other UNIX-like OSes.

6.1. Tools you need

First of all you need an assembler (compiler): nasm or gas. Second, you need a linker: ld, since the assembler produces only object code. Almost all distributions include gas and ld in the binutils package. As for nasm, you may have to download and install binary packages for Linux and docs from the ``nasm webpage''; however, several distributions (Stampede, Debian, SuSE) already include it, so check first. If you are going to dig in, you should also install the kernel source. I assume that you are using at least Linux 2.0 and ELF.

6.2. Hello, world!

Linux is 32-bit and has a flat memory model. A program can be divided into sections. The main sections are .text for your code, .data for your data, and .bss for undefined data. A program must have at least a .text section. Now we will write our first program. Here is the sample code:

6.2.1. NASM (hello.asm)

______________________________________________________________________

section .data                   ;section declaration

msg     db      "Hello, world!",0xa     ;our dear string
len     equ     $ - msg                 ;length of our dear string

section .text                   ;section declaration

                        ;we must export the entry point to the ELF linker or
global _start           ;loader. They conventionally recognize _start as their
                        ;entry point. Use ld -e foo to override the default.

_start:

;write our string to stdout
        mov     edx,len         ;third argument: message length
        mov     ecx,msg         ;second argument: pointer to message to write
        mov     ebx,1           ;first argument: file handle (stdout)
        mov     eax,4           ;system call number (sys_write)
        int     0x80            ;call kernel

;and exit
        mov     ebx,0           ;first syscall argument: exit code
        mov     eax,1           ;system call number (sys_exit)
        int     0x80            ;call kernel
______________________________________________________________________

6.2.2. GAS (hello.S)

______________________________________________________________________

.data                           # section declaration

msg:    .string "Hello, world!\n"       # our dear string
len = . - msg                           # length of our dear string

.text                           # section declaration

                        # we must export the entry point to the ELF linker or
.global _start          # loader. They conventionally recognize _start as their
                        # entry point. Use ld -e foo to override the default.

_start:

# write our string to stdout
        movl    $len,%edx       # third argument: message length
        movl    $msg,%ecx       # second argument: pointer to message to write
        movl    $1,%ebx         # first argument: file handle (stdout)
        movl    $4,%eax         # system call number (sys_write)
        int     $0x80           # call kernel

# and exit
        movl    $0,%ebx         # first argument: exit code
        movl    $1,%eax         # system call number (sys_exit)
        int     $0x80           # call kernel
______________________________________________________________________

6.3. Producing object code

The first step in building the binary is producing an object file from the source by invoking the assembler; we must issue the following.

For the nasm example:

$ nasm -f elf hello.asm

For the gas example:

$ as -o hello.o hello.S

This will produce the hello.o object file.

6.4. Producing executable

The second step is producing the executable file itself from the object file by invoking the linker:

$ ld -s -o hello hello.o

This will finally build the hello executable. Hey, try to run it... Works? That's it. Pretty simple.

7. RESOURCES

Your main resource for Linux/UNIX assembly programming material is the Linux Assembly resources page .
Do visit it, and get plenty of pointers to assembly projects, tools, tutorials, documentation, guides, etc, concerning different UNIX operating systems and CPUs. Because it evolves quickly, I will no longer duplicate it in this HOWTO. If you are new to assembly in general, here are a few starting pointers:

· The Art Of Assembly

· x86 assembly FAQ

· ftp.luth.se mirrors the hornet and x2ftp former archives of msdos assembly coding stuff

· CoreWars, a fun way to learn assembly in general

· Usenet: comp.lang.asm.x86 ; alt.lang.asm

7.1. Mailing list

If you are interested in Linux/UNIX assembly programming (or have questions, or are just curious) I especially invite you to join the Linux assembly programming mailing list. This is an open discussion of assembly programming under Linux, FreeBSD, BeOS, or any other UNIX/POSIX like OS; also, it is not limited to x86 assembly (Alpha, Sparc, PPC and other hackers are welcome too!). The list address is . To subscribe, send a blank message to . List archives are available at .

7.2. Frequently asked questions (with answers)

Here are frequently asked questions. The answers are taken from the ``linux-assembly mailing list''.

7.2.1. How do I do graphics programming in Linux?

An answer from Paul Furber :

Ok, you have a number of options for graphics in Linux. Which one you use depends on what you want to do. There isn't one Web site with all the information but here are some tips:

SVGALib: This is a C library for console SVGA access.
Pros: very easy to learn, good coding examples, not all that different from equivalent gfx libraries for DOS, all the effects you know from DOS can be converted with little difficulty.
Cons: programs need superuser rights to run since they write directly to the hardware, doesn't work with all chipsets, can't run under X-Windows.
Search for svgalib-1.4.x on http://ftp.is.co.za

Framebuffer: do-it-yourself graphics at SVGA res.
Pros: fast, linear mapped video access, ASM can be used if you want :)
Cons: has to be compiled into the kernel, chipset-specific issues, must switch out of X to run, relies on good knowledge of linux system calls and kernel, tough to debug.
Examples: asmutils (http://www.linuxassembly.org) and the leaves example, and my own site for some framebuffer code and tips in asm (http://ma.verick.co.za/linux4k/)

Xlib: the application and development libraries for XFree86.
Pros: complete control over your X application.
Cons: difficult to learn, horrible to work with and requires quite a bit of knowledge as to how X works at the low level. Not recommended, but if you're really masochistic go for it. All the include and lib files are probably installed already so you have what you need.

Low-level APIs: include PTC, SDL, GGI and Clanlib.
Pros: very flexible, run under X or the console, generally abstract away the video hardware a little so you can draw to a linear surface, lots of good coding examples, can link to other APIs like OpenGL and sound libs, Windows DirectX versions for free.
Cons: not as fast as doing it yourself, often in development so versions can (and do) change frequently.
Examples: PTC and GGI have excellent demos, SDL is used in sdlQuake, Myth II, Civ CTP, and Clanlib has been used for games as well.

High-level APIs: OpenGL - any others?
Pros: clean api, tons of functionality and examples, industry standard so you can learn from SGI demos for example.
Cons: hardware acceleration is normally a must, some quirks between versions and platforms.
Examples: loads - check out www.mesa3d.org under the links section.

To get going, try looking at the svgalib examples and also install SDL and get it working. After that, the sky's the limit.

7.2.2. How do I debug pure assembly code under Linux?

If you're using gas, you should consult the Linux assembly Tutorial by Bjorn Chambless.
With nasm the situation is a bit different, since it does not support gdb-specific debugging extensions. Although gdb is a source-level debugger, it can be used to debug pure assembly code, and with some trickery you can make gdb do what you need. Here's an answer from Dmitry Bakhvalov :

Personally, I use gdb for debugging asmutils. Try this:

1) Use the following stuff to compile:

   $ nasm -f elf -g smth.asm
   $ ld -o smth smth.o

2) Fire up gdb:

   $ gdb smth

3) In gdb:

   (gdb) disassemble _start

   Place a breakpoint at <_start+1> (if placed at _start the breakpoint wouldn't work, dunno why):

   (gdb) b *0x8048075

   To step thru the code I use the following macro:

   (gdb) define n
   >ni
   >printf "eax=%x ebx=%x ...etc...",$eax,$ebx,...etc...
   >disassemble $pc $pc+15
   >end

   Then start the program with the r command and debug with n. Hope this helps.

An additional note from ???:

I have had such a macro in my .gdbinit for quite some time now, and it sure makes life easier. A small difference: I use "x /8i $pc", which guarantees a fixed number of disassembled instructions. Then, with a well-chosen size for my xterm, gdb output looks like it is refreshed, not scrolling.

If you want to set breakpoints across your code, you can just use the int 3 instruction as a breakpoint (instead of entering the address manually in gdb).

7.2.3. Any other useful debugging tools?

Definitely strace can help a lot (ktrace and kdump on FreeBSD); it is used to trace system calls and signals. Read its manual page (man strace) and the strace --help output for details.

7.2.4. How do I access BIOS functions from Linux (BSD, BeOS, etc)?

No way. This is protected mode; use OS services instead. Again, you can't use int 0x10, int 0x13, etc. Fortunately, almost everything can be implemented through system calls or library functions. In the worst case you may go through direct port access, or make a kernel patch to implement the needed functionality. That's all for now, folks.
$Id: Assembly-HOWTO.sgml,v 1.16 2000/07/11 10:38:10 konst Exp $

Linux Astronomy HOWTO
Elwood Downey and John Huggins howto@astronomy.net
$Revision: 1.6 $, $Date: 2000/05/03 22:01:25 $

This document shares tips and resources for utilizing Linux solutions in the pursuit of Astronomy.

______________________________________________________________________

Table of Contents

1. Introduction
1.1 Knowledge Required
1.2 Scope
1.3 Version
1.4 Copyright
2. Software
2.1 Collections
2.2 Planetarium Programs
2.3 Libraries
2.4 Other
3. Astronomical Images over the web
3.1 List
4. Organizations
5. Hardware Control
5.1 Telescope Control
5.2 CCD Camera Control
6. Installation Help

______________________________________________________________________

1. Introduction

1.1. Knowledge Required

With all the help from major Linux distributions such as SuSE, Redhat, Caldera and many others, Linux based systems are becoming easier to use. However, there is still some need for an understanding of basic UNIX skills to make the most of Linux. Thus, this HOWTO will assume that the reader has at least a basic knowledge of using a UNIX system, including the ability to compile and install programs. A few resources we have found useful over the years include:

· "A Practical Guide to the UNIX System", Mark G. Sobell

· "Advanced Programming in the UNIX Environment", the late W. Richard Stevens

· "Running LINUX", Matt Welsh et al.

· "LINUX Device Drivers", Alessandro Rubini

Similarly, this is not a tutorial or reference for astronomy principles or astronomical instrumentation. Astronomy is perhaps the grandest of all sciences, employing widely disparate disciplines in a bold attempt to understand nothing less than the universe itself. Your interests will lead in many directions. A few references we have used include:

· "Astronomy with your Personal Computer", Peter Duffett-Smith

· "Astronomy on the Personal Computer", Oliver Montenbruck et al.

· "Textbook on Spherical Astronomy", W. M. Smart

· "The Astronomy and Astrophysics Encyclopedia", Stephen P. Maran, ed.

1.2. Scope

The authors define the scope of this HOWTO as primarily an index to Linux tools applicable in some fashion to the pursuit of Astronomy. It is *not* our intention to list WWW astronomy references in general. Our own interests tend more towards the technology than the pure science, and so we welcome contributions from others who have found Linux tools which contribute in other ways to Astronomy. Please contact us at the address above.

1.3. Version

$Revision: 1.6 $ $Date: 2000/05/03 22:01:25 $

The latest version of this document is always available on the Astronomy Net at Astronomy HOWTO. We eagerly accept suggestions from you. Send them to the Astronomy HOWTO Editors.

1.4. Copyright

Copyright 2000 by Elwood Downey and John Huggins. This document may be distributed only subject to the terms and conditions set forth in the LDP License, except that this document must not be distributed in modified form without the author's consent. A verbatim copy may be reproduced or distributed in any medium, physical or electronic, without permission of the author. Translations are similarly permitted without express permission if they include a notice on who translated them. Commercial redistribution is allowed and encouraged; however, please notify the authors of any such distributions. Excerpts from the document may be used without prior consent provided that the derivative work contains the verbatim copy or a pointer to a verbatim copy. Permission is granted to make and distribute verbatim copies of this document provided the copyright notice and this permission notice are preserved on all copies. In short, we wish to promote dissemination of this information through as many channels as possible. However, we wish to retain copyright on this HOWTO document, and would like to be notified of any plans to redistribute this HOWTO.

2. Software

2.1.
Collections

Here are some links to collections and other indexes of Linux astronomy software.

· The Linux for Astronomy CDROM
· Scientific Applications on Linux (SAL), Physics and Astronomy
· Linux Applications and Utilities Page, Science and Math

2.2. Planetarium Programs

Here is a discussion of whole programs, running on Linux, for use in finding objects, natural and man-made, in the sky.

· XEphem has been the pet project of one of us (Downey) for the past 15-odd years. It has grown to become one of the more capable interactive tools for the computation of astronomical ephemerides.
· XSky is by Terry R. Friedrichsen, terry@venus.sunquest.com. XSky is essentially an interactive sky atlas.
· Skymap is an astronomical mapping program written in Fortran and C for UNIX workstations by Doug Mink of the Smithsonian Astrophysical Observatory Telescope Data Center.
· Xplns reproduces the real starry sky on your X Window System display.
· AstrHorloge is a small astronomy program that shows a sky map and gives you the coordinates of stars and planets.

2.3. Libraries

This section discusses bits and pieces of software that can be used to form the basis for specialized projects.

· SLALIB, part of the Starlink Project, is a complete library of subroutines for astrometric computations.
· Astrophysics Source Code Library is a collection of links to numerical astrophysical process models.
· Astronomy and numerical software source codes is a collection of C codes related to astronomy.
· How to compute planetary positions.

2.4. Other

Every list needs a miscellaneous section, and this is it for Software.

· IRAF is a gigantic but exceptionally capable astronomical analysis system, shepherded over the past 20-odd years by Doug Tody of NOAO. It has accumulated innumerable authoritative contributions from leading astronomers in all areas of astronomical data analysis.
If you have a serious interest in astronomical data reduction and significant time to invest, this system will reward you mightily.

· Nightfall Eclipsing Binary Star Program

3. Astronomical Images over the web

Much effort has gone into allowing access to astronomical image file types such as FITS from any web browser. Here are some pointers.

3.1. List

The folks at Harvard have a list of image servers and image browsers.

· Astronomical Images Over the Web

4. Organizations

· The yearly Astronomical Data Analysis Software and Systems (ADASS) Conference Series provides a forum for scientists and computer specialists concerned with algorithms, software and operating systems in the acquisition, reduction and analysis of astronomical data. The program includes invited talks, contributed papers and poster sessions as well as user group meetings and special interest meetings ("BOFs"). All these activities aim to encourage communication between software specialists and users, and to stimulate further development of astronomical software and systems.
· The linuxastro mailing list, linuxastro@majordomo.cv.nrao.edu, is for people who are interested in porting astronomical software to Linux. For more information, see linuxastro.

5. Hardware Control

More folks are using Linux to control equipment. Users range from amateur astronomers in the field to professional observatories.

5.1. Telescope Control

· OCAAS is a complete Observatory Control and Astronomical Analysis System for Linux.
· XEphem has the capability to communicate with a telescope control daemon process.

5.2. CCD Camera Control

· Apogee Instruments Inc supports their line of professional CCD cameras under Linux.
· SBIG offers some assistance with operating their ST7 and ST8 CCD cameras under Linux.

6. Installation Help

You need to know what you're doing with Linux and installing programs, but help is available for some programs. Here are some ways to make life easier.
· AstroMake is a utility intended to make installations of some common astronomical packages (in binary form) easy.
· XEphem requires several elements to exist on your machine. Life is much simpler with the CDROM version of the program, as it contains an installation script which loads the appropriate precompiled binary for most systems and places all auxiliary files in the correct spots. See XEphem CDROM

Linux BRIDGE-STP-HOWTO

Uwe Böhme
Johann-Heinrich-Abt-Straße 7
95213 Münchberg
Germany
+49/9251 960877
+49/9251 960878
uwe@bnhof.de

Lennert Buytenhek
bridge code maintainer and developer
gnu.org
buytenh@gnu.org

Still draft

Copyright © 2000 by Uwe Böhme

Revision History
Revision v0.00  01 June 2000  Revised by: U.B.
Initial Release.
Revision v1.01  07 June 2000  Revised by: U.B.
Applied patch from Lennert. Corrected some syntactical errors. Completed some brctl commands. Added test output and description.
Revision v1.02  08 June 2000  Revised by: U.B.
More typo and grammar corrections.
Revision v1.03  09 June 2000  Revised by: U.B.
The usual typos. Applied Lennert's explanations about the message logs of the pull-the-plug test.
Revision v1.04  11 June 2000  Revised by: U.B.
The usual typos. Applied ultimate test dumps.
Revision v1.05  17 June 2000  Revised by: U.B.
System freeze remark. Modified style sheet.
Revision v0.01  25 June 2000  Revised by: U.B.
Changed name from BRIDGE-HOWTO to BRIDGE-STP-HOWTO (to avoid interference with the BRIDGE-HOWTO by Christopher Cole) and restarted version numbering (we were already too far). Lennert Buytenhek announced as coauthor.
_________________________________________________________________

Table of Contents

1. [1]License
2. [2]What Is A Bridge?
3. [3]Rules On Bridging
4. [4]Preparing The Bridge
   4.1. [5]Get The Files
   4.2. [6]Apply The Patches
   4.3. [7]Configure The Kernel
   4.4. [8]Compile The Kernel
   4.5. [9]Compile The Bridge Utilities
5. [10]Set Up The Bridge
   5.1. [11]brctl Command Synopsis
   5.2. [12]Basic Setup
6.
[13]Advanced Bridge Features
   6.1. [14]Spanning Tree Protocol
   6.2. [15]Bridge And The IP-Chains
7. [16]A Practical Setup Example
   7.1. [17]Hardware-setup
   7.2. [18]Software-setup
   7.3. [19]See It Work
   7.4. [20]Bridge Tests
Appendix A. [21]Network Interface Cards
Appendix B. [22]Recommended Reading
Appendix C. [23]FAQ

About The Linux Modular Bridge And STP

This document describes how to set up a bridge with the recent kernel patches and the brctl utility by Lennert Buytenhek. As of development kernel 2.3.47 the new bridging code is part of the mainstream. As of 20 June 2000 there are patches for the stable kernels 2.2.14 and 2.2.15.

What happens if a penguin crosses a bridge?
_________________________________________________________________

1. License

Copyright (c) 2000 by Uwe Böhme. This document may be distributed only subject to the terms and conditions set forth in the [24]LDP License available at [25]http://sunsite.unc.edu/LDP/LICENSE.html
_________________________________________________________________

2. What Is A Bridge?

A bridge is a device that separates two or more network segments within one logical network (e.g. one IP subnet). A bridge is usually placed between two separate groups of computers that talk with each other, but not that much with the computers in the other group. A good example of this is to consider a cluster of Macintoshes and a cluster of UNIX machines. Both of these groups of machines tend to be quite chatty amongst themselves, and the traffic they produce on the network causes collisions for the other machines that are trying to speak to one another. The job of the bridge is to examine the destination of the data packets one at a time and decide whether or not to pass the packets to the other side of the Ethernet segment. The result is a faster, quieter network with fewer collisions. The bridging code decides whether to bridge data or to drop it not by looking at the protocol type (IP, IPX, NetBEUI), but by looking at the MAC address unique to each NIC.
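The MAC-based forwarding decision described above can be sketched as a toy script. This is only an illustration of the learning-table idea, not the actual kernel bridging code; the MAC addresses and port names below are made up:

```shell
#!/bin/sh
# Toy model of a learning bridge: remember which port each source MAC
# was last seen on; forward to that port if the destination is known,
# otherwise flood to all ports.
TABLE=$(mktemp)

learn() {   # learn <src-mac> <port>: record/update where a MAC lives
    grep -v "^$1 " "$TABLE" > "$TABLE.new" || true
    mv "$TABLE.new" "$TABLE"
    echo "$1 $2" >> "$TABLE"
}

decide() {  # decide <dst-mac>: print the outgoing port, or "flood"
    port=$(grep "^$1 " "$TABLE" | cut -d' ' -f2)
    echo "${port:-flood}"
}

learn 00:a0:24:d1:23:45 eth0     # frame from this MAC seen on eth0
learn 00:a0:24:aa:bb:cc eth1     # frame from this MAC seen on eth1
decide 00:a0:24:aa:bb:cc         # known destination: prints "eth1"
decide 00:a0:24:de:ad:00         # unknown destination: prints "flood"
```

Note that the decision is made purely on MAC addresses, exactly as described: the payload protocol never enters the picture.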
Important

It's vital to understand that a bridge is neither a router nor a firewall. In simple terms, a bridge behaves like a network switch (i.e. a Layer 2 switch), making it a transparent network component (which is not absolutely true, but nearly). Read more about this in [26]Section 3. In addition, you can overcome hardware incompatibilities with a bridge, without leaving the address range of your IP net or subnet. E.g. it's possible to bridge between different physical media like 10 Base T and 100 Base TX. My personal reason for starting to set up a bridge was that at work I had to connect Fast Ethernet components to an existing HP Voice Grade network, which is a proprietary networking standard.

Features Above Pure Bridging

STP
  The Spanning Tree Protocol is a nifty method of keeping Ethernet devices connected in multiple paths working. The participating switches negotiate the shortest available path by STP. This feature is discussed in [27]Section 6.1.

Multiple Bridge Instances
  Multiple bridge instances allow you to have more than one bridge on your box up and running, and to control each instance separately.

Fire-walling
  There is a patch to the bridging code which allows you to use IP chains on the interfaces inside a bridge. More about this can be found in [28]Section 6.2.
_________________________________________________________________

3. Rules On Bridging

There is a number of rules you are not allowed to break (otherwise your bridge will).

* A port can only be a member of one bridge.
* A bridge knows nothing about routes.
* A bridge knows nothing about protocols higher than ARP. That's the reason why it can bridge any protocol possibly running on your Ethernet.
* No matter how many ports you have in your logical bridge, it's covered by only one logical interface.
* As soon as a port (e.g. a NIC) is added to a bridge you have no more direct control over it.
Warning

If one of the points mentioned above is not clear to you now, don't continue reading. Read the documents listed in [29]Appendix B first.

If you ever tried to ping an unmanaged switch, you will know that it doesn't work, because it doesn't have an IP address. To switch datagrams it doesn't need one. It's another matter if you want to manage the switch. It's too much strain to take a dumb terminal, walk to the place you installed it (normally a dark, dusty and warm room, with a lot of green and red Christmas lights), connect the terminal and change the settings. What you want is remote management, usually by SNMP, telnet, rlogin or (best) ssh. For all these services you will need an IP address. That's the exception to the transparency. The new code allows you, without any problem, to assign an IP address to the virtual interface formed by the bridge instance you will create in [30]Section 5.2. All NICs (or other interfaces) in your bridge will happily listen and respond to datagrams destined to this IP address. All other data will not interfere with the bridge. The bridge just acts like a switch.
_________________________________________________________________

4. Preparing The Bridge

This section describes what you need and what to do to prepare your bridge.
_________________________________________________________________

4.1. Get The Files

Here you can find a list of the files and downloads you will need for the setup of the bridge. If you have one of the mentioned files or packages in your distribution, of course there is no need to create network load. I'll only mention the files for the 2.2.14 kernel. If you want to try a different one (e.g. 2.2.15 or the recent development kernel) just replace the kernel version number and look whether you find it.

File and package list

Unpatched kernel-sources
  E.g. linux-2.2.14.tar.bz2, available from your local kernel.org mirror. Please check first if you find it in your distribution (take unpatched kernel-sources).
If you don't, please check [31]The Linux Kernel Archive Mirror System for a nearby mirror and download it from there.

Bridge patches

Note
  If your kernel is later than 2.3.47 you don't need this. Bridging is part of the mainstream from that version on.

Get the bridge kernel patches for your kernel version from [32]http://www.openrock.net/bridge/. Identify the file by the kernel number.

Note
  There are also patches that allow the bridge to work with IP chains. I never tried them, for I don't see the need to firewall inside my LAN, and absolutely no need to bridge against the outer world. Feel free to contribute about that issue.

Kernel patches for the stable 2.2 kernel:
  + [33]bridge-0.0.5-against-2.2.14.diff
  + [34]bridge-0.0.5-against-2.2.15.diff

Bridge configuration utilities

You will also need the bridge configuration utilities to set up the bridge ([35]Section 5). You can download them from [36]http://www.openrock.net/bridge/ as well. The current one (as of this writing) is bridge-utils-0.9.1.tar.gz. [37]bridge-utils-0.9.1.tar.gz.
_________________________________________________________________

4.2. Apply The Patches

Note
  If your kernel is later than 2.3.47 you don't need this. Bridging is part of the mainstream from that version on.

Apply the bridging patch to your kernel. If you don't know how to do that, read the Kernel-HOWTO, which can be found in your distribution or at [38]http://sunsite.unc.edu/LDP/HOWTO/HOWTO-INDEX.html

Example 1. Applying a kernel patch

root@mbb-1:~ # cd /usr/src/linux-2.2.14
root@mbb-1:/usr/src/linux-2.2.14 # patch -p1 < \
bridge-0.0.5-against-2.2.14.diff
...
_________________________________________________________________

4.3. Configure The Kernel

Now it's time to configure our freshly patched kernel to create the ability to bridge. Run make config, make menuconfig or the click-o-rama make xconfig. Select bridging in the networking options section to be compiled as a module.
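Afterwards you can verify that the option really made it into the kernel configuration. The sketch below assumes the option is called CONFIG_BRIDGE (the name used by the mainline bridging code; it may differ in a patched 2.2.x tree) and checks a sample config fragment for illustration:

```shell
# Check that bridging was selected as a module ("=m") or built in ("=y").
check_bridge_config() {
    grep '^CONFIG_BRIDGE=[my]' "$1"
}

# Illustration against a sample .config fragment:
printf 'CONFIG_NET=y\nCONFIG_BRIDGE=m\n' > /tmp/sample.config
check_bridge_config /tmp/sample.config     # prints: CONFIG_BRIDGE=m

# Against your real tree it would be:
#   check_bridge_config /usr/src/linux-2.2.14/.config
```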
AFAIK there is no strong reason not to compile it as a kernel module, whereas I have heard rumors about problems with compiling the bridging code directly into the kernel.

root@mbb-1:~ # cd /usr/src/linux-2.2.14
root@mbb-1:/usr/src/linux-2.2.14 # make menuconfig
...
_________________________________________________________________

4.4. Compile The Kernel

Compile your kernel ([39]Example 2). Arrange for the newly compiled kernel image to be loaded. I don't know if the kernel patches apply only to the bridging module or also modify some interfaces inside vmlinuz. So it might not be an error to reboot after you have updated the kernel image.

Example 2. Commands To Compile Your Kernel

root@mbb-1:/usr/src/linux-2.2.14 # make dep clean zImage modules modules_install zlilo
...
_________________________________________________________________

4.5. Compile The Bridge Utilities

There is no magic about it. Just unpack the utilities tarball, cd into the newly created directory and give a make.

Example 3. Commands To Compile Your Bridge-Utilities

root@mbb-1:/usr/src/linux-2.2.14 # cd /usr/local/src
root@mbb-1:/usr/local/src/ # tar xzvf bridge-utils-0.9.1.tar.gz
...
root@mbb-1:/usr/local/src # cd bridge
root@mbb-1:/usr/local/src/bridge # make
...

After the compilation shown in [40]Example 3 has worked properly, you can copy the executables to, let's say, /usr/sbin/ (at least I did). So the commands you have to give should be clear, but to be complete see [41]Example 4.

Example 4. Copy The Binaries Of The Utilities

root@mbb-1:/usr/local/src/bridge # cd brctl
root@mbb-1:/usr/local/src/bridge/brctl # cp brctl /usr/bin/local
root@mbb-1:/usr/local/src/bridge/brctl # chmod 700 /usr/bin/local/brctl
root@mbb-1:/usr/local/src/bridge/brctl # cp brctld /usr/bin/local
root@mbb-1:/usr/local/src/bridge/brctl # chmod 700 /usr/bin/local/brctld

Now you can also copy the new man page to a decent place, as shown in [42]Example 5.

Example 5.
Copy The Man-page Of brctl

root@mbb-1:/usr/local/src/bridge # cd doc
root@mbb-1:/usr/local/src/bridge/doc # gzip -c brctl.8 > /usr/local/man/man8/brctl.8.gz
_________________________________________________________________

5. Set Up The Bridge

Make sure all your network cards are working nicely and are accessible. If so, ifconfig will show you the hardware layout of the network interface. If you have problems making your cards work, please read the Ethernet-HOWTO at [43]http://sunsite.unc.edu/LDP/HOWTO/HOWTO-INDEX.html. Don't mess around with IP addresses or netmasks. You will not need them until your bridge is fully operational and up.

After you have done the steps mentioned above, a modprobe -v bridge should show no errors. Also, for each of the network cards you want to use in the bridge, ifconfig whateverNameYourInterfaceHas should give you some information about the interface. If your bridge utilities have been correctly built and your kernel and bridge module are OK, then issuing a brctl should show a small command synopsis.
_________________________________________________________________

5.1. brctl Command Synopsis

root@mbb-1:~ # brctl
commands:
        addbr           add bridge                    (1)
        addif           add interface to bridge       (2)
        delbr           delete bridge                 (3)
        delif           delete interface from bridge  (4)
        show            show a list of bridges        (5)
        showbr          show bridge info              (6)
        showmacs        show a list of mac addrs      (7)
        setageing