Monday, April 30, 2007

Linux Appliance Design

"A week and a half ago I received Linux Appliance Design by Bob Smith, John Hardin, Graham Phillips and Bill Pierce, published by No Starch Press. This is one of No Starch's latest titles and was released in the beginning of April. As a hardware/embedded systems guy I was really eager to get my hands on the book. For those who don't know what the book is about, it's about making an application specific utility, an electronic tool or "appliance" that can be used for a specific task. The book defines an appliance as "A device designed to primarily perform a single function" and that's exactly what they do."

The book revolves around Laddie, an example alarm system for a building. The book includes a complete explanation of the system, what design features it uses, and a LiveCD with the final application for your hacking pleasure.

I have to say, Linux Appliance Design is well written and very, very thorough. This is not a beginner text: the authors focus on Linux programmers who understand C and Linux systems and want to take things a step further than conventional software. If you think this is a book for you, you should be a C/Linux guru, or dust off those old textbooks and brush up before you pick it up.

When a friend asked me what was in the book I gave him the response, "Everything you need to make a sweet daemon with any interface you want". This is exactly what Linux Appliance Design is: a library in a book. Nostarch.com has a chapter list for the text, so you can see what I mean.

The layout for the text is well organized and starts where every project should: architecture and design. Personally I felt the authors were beating a dead horse after a couple of pages when everything kept coming back to separating interface from implementation, but hey, it's an important point that a lot of people seem to miss.

An interesting chapter is the explanation of the Run-Time Access (RTA) library the authors developed. Modeled on PostgreSQL, the RTA lib is an impressive solution that allows for daemon communication and configuration from any language that can talk to a database. This library is released under the GPL and can be downloaded from the book's companion site. The RTA is also used for the rest of the book, so don't skip it or you'll have no idea what they are talking about.

The text is not only an explanation of the Laddie system using the RTA; it is an all-encompassing design text, which I really like. There are chapters dedicated to building different frontend UIs for the system and to discussing communication protocols. This is what transforms the text from book into library. Each chapter can almost stand on its own as an application of that language or program. I was quite impressed with the web interface chapter and how the authors compared web servers and designed a system, and also with the framebuffer chapter on how to make a cool graphical interface.

A common theme for all the chapters is the structure. The authors discuss and design each element they use in the system before looking at any one program or daemon. For anyone who has written or read development reports the format is very familiar: explain what you designed, why you chose those components, why you passed on others, how the system works, and finally what you would do differently next time. This format kind of reminded me of lab reports in school: cover every question you think your professor audience might ask.

Overall, Linux Appliance Design is a well-written, detailed and thorough book with a lot of information. I would recommend this title mainly to someone who is interested in daemon development or server design; however, it can be read by anyone who wants to see a full project development cycle.

s1axter is the main poster for Geeksinside.com. Geeksinside.com is a DIY, hardware hacking, technology blog that showcases links, reviews and projects.

Source : http://books.slashdot.org

Thursday, April 26, 2007

Linux-powered robots go global

Researchers have unveiled internet-controlled wireless robots which are simple enough for "almost anyone" to build with off-the-shelf parts.

Created by boffins at Carnegie Mellon University, the robots can take many forms, from a three-wheeler with a mounted camera to a flower loaded with infrared sensors.

The customisable machines have the ability to link wirelessly to the internet, allowing users to control and monitor the robot's actions from any internet-connected computer in the world.

Hardware and a set of "recipes" that people follow to build the robots have been developed by the technicians.

Both are part of the Telepresence Robot Kit (TeRK) developed by associate professor of robotics Illah Nourbakhsh and members of his Community Robotics, Education and Technology Empowerment (Create) lab.

The stated goal is to make highly capable robots accessible and affordable for college and pre-college students, as well as anyone interested in robots.

At the heart of each TeRK robot is a unique controller called Qwerk that combines a Linux computer with the software and electronics necessary to control the robot's motors, cameras and other devices.

Qwerk, developed by the Create lab and Charmed Labs of Austin, Texas, also connects the robot automatically and wirelessly to the internet so it can be controlled from anywhere.

"The internet connection means that the robots are much more global," said Nourbakhsh.

The robot can send photos or video, respond to RSS feeds, or access the internet to find information, opening a wide range of possibilities. "We are hoping that people notice that the sky's the limit," added Nourbakhsh.

Among the TeRK recipes already available is a small, wheeled robot with a video camera that people might use to keep an eye on their home or pet while at work or school.

Another recipe under development includes environmental sensors for air quality and sound pollution.

A less conventional recipe will produce a robotic flower with six petals that can open and close based on moods or use its petals to play a game of catch.

Source : http://www.computing.co.uk/vnunet/news/2188617/linux-powered-wireless-internet

Tuesday, April 24, 2007

Is Office 2007 A Case Against Linux?

"Have you used Office 2007 yet? It's quite a bit different from Office 2000 and 2003, but, in my opinion, it's a pretty spectacular leap forward. This has been well-document elsewhere, but is quite a shift that the blogosphere can't quite capture as well as first-hand use. It is, in fact, enough of a leap forward that I don't believe OpenOffice, as an Office 200x clone, can compete. OpenOffice is a great product, is free, and seamlessly cross-platform. However, the biggest reason I'm continuing to use Vista right now instead of the latest Ubuntu distro I just installed is Office 2007..."

This is not to say that OpenOffice is inferior or in any way bad, before the flames start flying. However, my users (mostly students with Office 2007) have very quickly adapted to the interface, feel that it is intuitive and flexible, and like the tight integration of all of the components. Here in public education, we get such substantial discounts on Microsoft licensing that cost is somewhat less of an issue here than it might be elsewhere. Full-blown Office 2007 retail would be utterly unobtainable here, but educational pricing is such that the free factor of OpenOffice doesn't give it the win by default.

OpenOffice's greatest asset (aside from being free) is the easy compatibility between Linux, Windows, and Mac platforms (no conversion necessary). While such is not the case for Office, we're in a pretty homogeneous environment here, so the cross-platform usability is less of a selling point.



Tabs, pivot tables, and a slick new interface? Is it worth it? A lot of my students think so. A lot of my older teachers disagree and miss the OpenOffice/Office 2k interface.

So what's the verdict? I find myself still using Vista much more than Ubuntu (even though so far I find that I prefer Ubuntu and its abundant free software, easy networking, and snappy performance) because I've become so enamored of Office 2007. I think I need to spend some more time with OpenOffice and see if I can end my illicit affair, for fear of being labeled a fanboy. Office 2007 is a hard habit to break, though. I'll keep you posted.

Red Hat buys data integration firm MetaMatrix

Red Hat has reached an agreement to acquire privately held data management firm MetaMatrix, the companies announced Tuesday.

Red Hat executives said MetaMatrix's software will be bundled in with its JBoss middleware as part of a services-oriented architecture package. Financial terms of the deal were not disclosed.

MetaMatrix, based in Waltham, Mass., sells software for accessing disparate data sources. For example, its tools are used to help companies create a single "view" of a customer by pulling information from several different databases.

Red Hat said it plans to change MetaMatrix's business model to align it with the Linux seller's open-source structure. It will move pricing for MetaMatrix products to a subscription model, rather than an upfront one-time license.

Red Hat intends to make all the MetaMatrix software available under an open-source license within a year, said Tim Yeaton, senior vice president of enterprise solutions at Red Hat.

The company also detailed its initiatives to appeal to both software programmers and corporate customers.

Red Hat on Tuesday launched a revamped open-source Web site at JBoss.org that is aimed specifically at developers who participate in open-source projects.

The company also created integrated packages of different JBoss products aimed at corporate customers who want stable software distributions and multiyear support contracts, rather than rapidly updated products.

The first package will include the JBoss application server, Hibernate data-access software, clustering and its Seam Web development tools.

Later in the year, Red Hat will release a JBoss integration package that will include its JBoss ESB, said Shaun Connolly, vice president of product management.

Monday, April 23, 2007

Xandros Showcases New Linux Server at CA World Las Vegas

Xandros, the leading provider of intuitive end-to-end Linux solutions and cross-platform management tools, announces the preview of the all-new Xandros Server Standard Edition 2, a complete, enterprise-grade SMB Linux server package, at CA World Las Vegas booth #371 on April 22-26, 2007. Demonstrations at CA World will highlight the Xandros Virtual Machine Manager, which brings the benefits of server virtualization to small and medium businesses, without requiring Linux expertise. Xandros plans to release the server to manufacturing on May 1.

Building on the success of the first version of Xandros Server, awarded "Best Integrated Solution" soon after its release in 2006, Xandros has achieved a new milestone in functionality and ease of use in this latest server version. Based on the newly released Debian GNU/Linux 4.0 (Etch) and powered by Xandros' revolutionary Managed Community Model, the Xandros Server 2 provides performance and stability without any complexity. Using feedback from current customers and channel partners, Xandros engineered version 2 with a focus on reducing the ever-increasing costs of effectively managing server environments. With new features including virtualization, streamlined systems monitoring and management, and increased Windows network integration, Xandros Server enables SMBs to benefit from the cost savings in consolidation, resource management, and problem prevention, as opposed to problem resolution.

The core of Xandros Server's architecture is its managed community model of server management, where servers, services and applications are tightly coupled into a cohesive community. The managed community model makes every server in the community aware of all the other servers within the community. This unique approach enables administrators to manage a community of servers as an integrated whole, rather than individually, making it less likely to cause costly configuration errors. The Xandros Management Console, an all-graphical portal for administrators to view and manage every element of the community, includes workflow automation and intuitive wizards that eliminate the need for Linux expertise to configure and manage Xandros Servers. The Xandros Management console can run alongside the Microsoft Management Console on the administrator's Windows XP or Vista workstation.

In addition to demonstrations of the final release candidate of Server 2, Xandros is providing visitors to its booth at CA World with special 15% discount coupons for the Xandros Server.

About Xandros

Xandros, Inc. is the leading provider of award-winning, intuitive, end-to-end Linux solutions, including desktops, SMB and enterprise servers, and mixed-environment, cross-platform management tools. Xandros pioneered low-cost, graphical operating systems that leverage existing skill sets and provide seamless Windows-Linux interoperability. It has since extended its initial Debian-based consumer and business desktop line with SMB and enterprise servers and management software, featuring workflow automation and "single pane of glass" remote deployment and administration. The company is headquartered in New York with research and development facilities in Ottawa and Mumbai, and sales and support offices worldwide. For more information, please visit www.xandros.com.

Use Cygwin when you can't use Linux

Attached to those applications that run on Linux but forced to use Windows? Take Cygwin out for a spin.

Linux fans would love to use Linux all of the time, but sometimes it just isn't possible. Maybe a certain application requires that you use Windows, or perhaps you're forced to use Windows at work. Regardless of the reason, if you've become attached to your Linux and open source applications, rest assured that you can use Cygwin to run most of those applications on Windows.

Cygwin is an environment for Windows, similar to Linux, that provides a Linux API emulation layer and a collection of tools that are normally only available on open source systems such as Linux, the *BSDs, and Mac OS X.

To begin using this environment, visit the Cygwin Web site at http://cygwin.com/ and download the installer -- the setup.exe file. Once this component is downloaded, double-click it and choose the Install From Internet option, which will download only the packages you want to install. You'll need to specify the root environment of the installation, where packages will be downloaded to, and a mirror site from which to download packages.

After you've picked a mirror, the installer will download some base files and allow you to select packages from various categories. Here you can choose what shells, text-processing tools, databases, desktop applications, and other programs that you'd like to install. Since Cygwin comes with an X server, you can even install GNOME and KDE programs. Better yet, you can compile your own programs to run under Cygwin by installing gcc and friends.

Cygwin will dutifully download and install all of the packages that you select. When the download process is complete, you can fire up Cygwin and an initial bash shell by clicking the Cygwin icon on your desktop.
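
As a quick sanity check that the toolchain works, you can compile and run a trivial C program from that bash shell. A minimal sketch (the filename is just an example):

$ cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello from Cygwin\n"); return 0; }
EOF
$ gcc -o hello hello.c
$ ./hello
Hello from Cygwin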

Modify data in an LDAP directory

LDAP is perhaps one of the fastest-growing database technologies available for Linux due to its speed and its read-often, write-little design. LDAP is more of a directory than a database—meant to contain data that doesn't change often. Its primary uses are for address books, configuration data, and user authentication. Often, LDAP data is created and left alone with little need for modification.

However, sometimes data that changes little does in fact change. On Linux, the most popular LDAP server is OpenLDAP, and it provides command-line tools to add, delete, query, and manipulate the LDAP database. However, it is far from user-friendly, and there are other tools that are much easier to work with. One such tool is the Java-based ldapbrowser, available from http://www-unix.mcs.anl.gov/~gawor/ldap/download.html.

On a Linux system, you can start the program by executing:

$ sh lbe.sh

Once the program starts, select the Quick Connect tab or else create a new Session. If you wish to just browse the directory, leave the Anonymous Bind section checked; otherwise uncheck it, and provide the user information to log in. You'll need to fill in the host field and either supply the base DN or have ldapbrowser fetch it for you.

Once you've connected, you can view the LDAP directory via a tree-mode browser. This browser will, depending on the credentials you supplied when creating the session, allow you to view the entire contents of the LDAP directory tree. Selecting an item in the tree view will allow you to view the contents of it in the right-hand pane. Here, if you double-click on an entry, you will be able to modify that entry's contents.

For instance, if you wanted to change a user's login shell, you would select (most likely) the ou=People node, and then the uid=user node, for the user you wish to modify. When the details of the user show up in the right-hand pane, double-click the loginShell entry and change the value of the shell. From that point forward, the user's login shell will be changed.
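
If you prefer to stay on the command line, the same change can be made with OpenLDAP's ldapmodify tool. A minimal sketch, assuming a base DN of dc=example,dc=com and an admin bind DN (substitute your own values):

$ cat > change-shell.ldif <<'EOF'
dn: uid=user,ou=People,dc=example,dc=com
changetype: modify
replace: loginShell
loginShell: /bin/zsh
EOF
$ ldapmodify -x -D "cn=admin,dc=example,dc=com" -W -f change-shell.ldif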


The project, backed by a stealth-mode start-up called Qumranet, uses a technical and cultural approach that has quickly drawn powerful allies -- including Red Hat and Linux founder Linus Torvalds.

That success is only a first step in KVM's push to make a mark in virtualisation. But it signals significant influence over the technology, which is spurring a top-to-bottom revamp of the computing industry through its ability to make a single machine behave like many.

But does the world need another virtualisation option? EMC subsidiary VMware rules the roost today. Microsoft is working on a project, called Viridian, that is set to debut in roughly a year. And numerous open-source allies already have focused attention on an open-source rival called Xen. While KVM delivers some new options and competition, it also brings new complications.

"In the near term, KVM will cause some pain because of the market confusion and developer dilution it will cause," said Illuminata analyst Gordon Haff. "But in the longer run, better technical options can only be good for Linux and open source."

KVM, which stands for "Kernel-based Virtual Machine", provides a new Linux-based mechanism for splitting a single physical computer into multiple virtual machines. It's going up against another approach, which uses a low-level software "hypervisor" to perform the same virtualisation function.

The industry is scrambling to adopt virtualisation for a range of reasons: so that groups of inefficient servers can be replaced with fewer machines; so software can be tested in harmless partitions; and ultimately, so data centers packed with computers can fluidly adjust to shifting priorities.

Industry players such as Novell and IBM say they're watching to see how well KVM fares. But Brian Stevens, the chief technology officer of dominant Linux seller Red Hat, believes KVM is viable.

"There's a year of work, I'd guess, to really make it at parity to where Xen is today...But I think it's going to happen," Stevens said. "The (open-source programming) community is really going to gravitate (to KVM) much more so than (to) Xen."

Qumranet has funding from Sequoia Capital and Norwest Venture Partners, but Chief Executive Benny Schnaider is mum on the company's business plan. In an interview, he said only that Qumranet is "not planning to make money by selling or supporting KVM."

The KVM project got started in early 2006, Schnaider said. That's about the same time that Moshe Bar left XenSource, the Xen commercialisation start-up that he co-founded. Bar, who now is Qumranet's chief technology officer, declined to comment for this story.

Qumranet is based in Santa Clara, Calif., with research and development in Israel. (Qumran is an ancient settlement near the caves where the Dead Sea Scrolls were found.) The start-up has more than 30 employees, most of them engineers, Schnaider said. Given that fewer than a dozen are working on KVM, according to lead programmer and Qumranet employee Avi Kivity, it's a good bet that the company has other technology in the works.

Kivity introduced the world to KVM with an October 19 posting to the Linux kernel mailing list. His patch updated Linux so that higher-level software could take advantage of hardware virtualisation features built into the latest processors from Intel and Advanced Micro Devices. The result: Other operating systems, including Microsoft Windows, can be "guests" running on a Linux host foundation, on newer hardware.

KVM's approach differs from that of Xen, which governs access to hardware using a combination of a lightweight "hypervisor" foundation and a privileged operating system, which is typically Linux.

KVM's method is conceptually closer to one of two approaches used by VMware -- the "hosted" model used in the free VMware Server and Player products. In that model, guest virtual machines run atop a copy of the operating system. In the second VMware approach, used in the higher-end ESX Server product, a full-featured, heavyweight hypervisor governs access to underlying hardware.

Unlike Xen additions to Linux, the KVM patch slipped nearly instantly into the mainstream kernel maintained by Torvalds and a group of deputies.

"We did things the Linux way," Kivity said in an interview. "I am a longtime lurker on the Linux kernel mailing list, so I know what's important to the kernel maintainers and tried to get things right the first time. Where I got things wrong, I fixed them quickly."

He introduced KVM with source code, not words. "Kernel maintainers only take you seriously if the first word in a message is 'PATCH,'" Kivity said.

Using script to record terminal sessions

Most sys admins know the importance of keeping an action log of various tasks, configuration changes, and so on. Simple logs indicating, "I did this" or "John did that" may be sufficient in some organizations, but for some, full transcripts of changes made are desired. Doing a copy-and-paste of terminal output can be tedious at best, so one answer is to use a little-known program called script, which is part of the util-linux package on most Linux distributions.

script records everything in a session: things you type and things you see. It even records color, so if your command prompt or program output contains color, script will record it.

To use script, simply execute:

$ script

By default, it writes to a file called typescript in the current directory. From then on, everything you type will be recorded to that file. To write to a different file, simply use script /path/to/file.

When you're done, type exit. This will close down the script session and save the file. You can now examine the file using cat or any other program.

The downside of using script is the fact that it records all special characters, so your output file will be full of control characters and ANSI escape sequences. This can be avoided by using a very spartan shell with script:

$ SHELL=/bin/sh PS1="$ " script

When using script, don't use interactive programs or programs that manipulate the screen, like vi or top. They will ruin the output of the session. Otherwise, any command line programs you use and the steps you take to accomplish a task will be recorded. If you do need to edit a file in the transcript, consider exiting the script session and restarting it after the file edit with script -a, which will append the new session to the old session.
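
Putting those pieces together, a typical change-logging session might look like the following sketch (the transcript filename and the commands being recorded are just examples):

$ SHELL=/bin/sh PS1="$ " script /root/changes-2007-04.txt
$ /etc/init.d/httpd restart
$ exit
$ vi /etc/httpd/conf/httpd.conf    # edit the file outside the transcript
$ SHELL=/bin/sh PS1="$ " script -a /root/changes-2007-04.txt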

Learn the power features of zsh

The Z Shell (zsh) is a powerful shell that is not used by many Linux users. The main reason for this is that most Linux distributions install bash and make it the default shell. zsh is packaged for virtually every Linux distribution, and installation is usually an apt-get, urpmi, or yum away.

One of the great features of zsh is its tab-completion system, which handles all the logistics of completion for you and is extremely easy to enable: just add two lines to your ~/.zshrc file:

autoload -U compinit
compinit

The compinit function is what loads the tab-completion system by defining a shell function for every utility that zsh is able to tab-complete. By using autoload, you can optimize zsh by telling it to defer reading the definition of the function until it's actually used, which speeds up the zsh startup time and reduces memory usage.

Using the setopt command, you can configure over 150 different options that impact how zsh works. For instance:

setopt autocd

The line above will allow you to change directories simply by typing the name of the directory (no need to use cd). Or, you might wish to use more powerful globbing or pattern matching features, which can be done by adding the line below to ~/.zshrc:

setopt extended_glob

The various zsh options that can be set with setopt are documented in the zshoptions manpage:

$ man zshoptions

Note that the ~/.zshrc file is only sourced for interactive shells. If you want to set options for when zsh is run non-interactively (e.g., via a cron job), you'll want to add those to ~/.zshenv.

Another nice feature with zsh is how it handles prompts. These can be custom or they can be loaded via zsh's prompt system, which contains a number of "stock" prompts that might be suitable. For instance, to use the prompt system enter:

autoload -U promptinit
promptinit
prompt fire

To list the available prompts, enter "prompt -l" on the command line. To define your own prompt, use the $PS1 variable, but zsh uses different format specifiers than bash, so a nice prompt might look like:

PS1=$'%{\e[1;32m%}%n@%m%{\e[0m%}:%B%~/%b >%# '

The resulting prompt looks like:

joe@odin:~/ >%

with the user and hostname in bright green.
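
Pulling the pieces from this tip together, a minimal ~/.zshrc might look like the following sketch (the options and the choice of the "fire" prompt are just examples):

# ~/.zshrc
autoload -U compinit
compinit
setopt autocd
setopt extended_glob
autoload -U promptinit
promptinit
prompt fire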

Set up Logical Volume Manager in Linux

Logical Volume Manager (LVM) is a mechanism to create virtual drives out of physical drives. These virtual (or logical) drives can then be manipulated in interesting ways: They can be grown or shrunk, and they can span more than one physical disk. An LVM in and of itself is exciting because it allows you to turn a number of disks into one massive volume, but it becomes even more interesting when throwing RAID into the mix.

Using LVM with a RAID-1 mirroring system provides large devices with redundancy. This is important because if one drive in an LVM volume set dies, it could leave your data in an inconsistent (or entirely gone) state. Using LVM over RAID is really no different than using LVM on a physical disk; instead of adding physical volumes to your LVM set, you would add md devices, using /dev/md0 rather than /dev/hda1.
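
As a rough sketch of that RAID-backed variant (assuming a two-disk RAID-1 array built with mdadm; the device names are just examples), the only real difference from the walkthrough below is that the physical volume is the md device:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
# pvcreate /dev/md0
# vgcreate data /dev/md0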

To begin, creating an LVM set from physical disks is quite easy. The following commands will get you started. This also assumes you're using a fairly recent Linux kernel; most distributions have LVM available for install.

# modprobe dm-mod
# vgscan
# fdisk /dev/hda

The first step is to partition the drive. You don't have to give the entire drive over to LVM if you don't want to. Create a partition, say hda1, and give it the type 8e, which is for Linux LVM. Do the same with the second disk (say, hdb). After that, execute:

# pvcreate /dev/hda1
# pvcreate /dev/hdb1

These commands will make the partitions available to LVM for use. The next step is to create the volume group:

# vgcreate data /dev/hda1 /dev/hdb1

This will create a volume group called "data" and assign /dev/hda1 and /dev/hdb1 to it. If you wish to add a third drive to the group later, you would use vgextend data /dev/hdc1. To obtain information on your volume group, use vgdisplay and the volume group name. For information on the physical volume, use pvdisplay. You need to use vgdisplay to find out how many physical extents are available for use. Here, we'll assign them all to one large logical device:

# vgdisplay data | grep "Total PE"
# lvcreate -l 10230 data -n files

The number of physical extents available was 10230, and all were assigned to the logical volume "files." Now you can format, manipulate, and mount the volume just as any other device, except the device name will be /dev/data/files or /dev/mapper/data-files:

# mke2fs -j /dev/data/files
# mkdir -p /srv/files
# mount /dev/data/files /srv/files

There is a lot more information about manipulating and maintaining LVM volumes (http://aplawrence.com/Linux/lvm.html) out there, and it would be wise to become familiar with it (http://tldp.org/HOWTO/LVM-HOWTO/), but it's quite straightforward to create your first LVM volume.
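
Growing a logical volume later follows the same pattern. A minimal sketch, assuming there is still free space in the volume group and the filesystem is the ext3 one created above (unmount first to be safe):

# umount /srv/files
# lvextend -L +10G /dev/data/files
# e2fsck -f /dev/data/files
# resize2fs /dev/data/files
# mount /dev/data/files /srv/files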

10 things you should do to a new Linux PC before exposing it to the Internet

1: Your purpose

Linux, like Microsoft Windows, is simply a computer operating system. When I talk to friends or co-workers who are embarking on the Linux experience for the first time, this is the first point I stress. Linux in itself is not a magic wand that can be waved to make all sorts of computing problems disappear. While Windows has its own set of problems, so too does Linux. There is no such thing as a perfect or completely secure computer operating system. Will the machine be a desktop computer or a server? Purpose is key to understanding how to initially install and configure your Linux PC.

2: Installation

Unlike Windows, Linux does not present itself as a "server" version or as a "desktop" version. During a typical installation of Linux the choice is yours as to exactly what software you wish to install and therefore exactly what type of a system you are constructing. Because of this, you need to be aware of the packages that the installation program is installing for you. For example, some distributions will configure and start a Samba server or a mail server as part of the base install. Depending upon the purpose of your Linux PC and the security level you are prepared to accept, these services may not be needed or desired at all. Taking the time to familiarize yourself with your distribution's installer can prevent many headaches and/or reinstalls down the road.

3: Install and configure a software firewall

A local software firewall can provide a "just in case" layer of security to any type of network. These types of firewalls allow you to filter the network traffic that reaches your PC and are quite similar to the Windows Firewall. The Mandriva (http://wwwnew.mandriva.com/) package called Shorewall (http://www.shorewall.net/), along with a component of the Linux kernel called Netfilter, provides a software firewall. By installing and configuring Shorewall during the installation process, you can restrict or block certain types of network traffic, whether it is coming into or going out of your PC.

To access and configure your firewall for Mandriva simply run the mcc (or Mandriva Control Center) command from a command prompt or, depending upon your graphical environment, you may be able to access the Mandriva Control Center from your base system menu. In the security options, select the firewall icon and you will be presented with a list of common applications that may need access through your firewall. For example, checking the box for "SSH server" will open port 22 needed by the Secure Shell server for secure remote access. There is also an advanced section which will allow you to enter some less commonly used ports. For example, entering "8000/tcp" will open port 8000 on your PC to TCP-based network traffic.
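
If you would rather edit Shorewall directly than go through the control center, the equivalent entries live in /etc/shorewall/rules. A rough sketch (the "net" and "$FW" zone names are the common defaults and depend on how your zones are defined); apply the change with shorewall restart:

#ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
ACCEPT    net      $FW    tcp     22
ACCEPT    net      $FW    tcp     8000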

Blocking or allowing network traffic is one layer of security, but how do you secure a service that you do allow the Internet or your intranet to connect to? Host based security is yet another layer.

4: Configuring the /etc/hosts.deny and /etc/hosts.allow files

In the preceding section we looked at the example of opening the Secure Shell service to network traffic by opening port 22 on our firewall. To further secure this server from unwanted traffic or potential hackers, we may wish to limit the hosts or computers that can connect to this server application. The /etc/hosts.deny and /etc/hosts.allow files allow us to do just that.

When a computer attempts to access a service such as a secure shell server on your new Linux PC, the /etc/hosts.deny and /etc/hosts.allow files will be processed and access will be granted or refused based on some easily configurable rules. Quite often for desktop Linux PCs it is very useful to place the following line in the /etc/hosts.deny file:
ALL: ALL

This will deny access to all services from all hosts. It seems pretty restrictive at first glance, but we then add hosts to the /etc/hosts.allow file that will allow us to access services. The following are examples that allow some hosts remote secure shell access:
sshd: 192.168.0.1 #allow 192.168.0.1 to access ssh
sshd: somebox.somedomain.com #allow somebox.somedomain.com to access ssh

These two files provide powerful host based filtering methods for your Linux PC.

5: Shut off or remove non-essential services

Just like Windows, Linux can have services running in the background that you either don't want or have no purpose for. By using the chkconfig command you can see which services are enabled and turn them on and off as needed, as shown below. Services that are not running don't provide security holes for potential hackers and don't take up those precious CPU cycles.
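
A minimal sketch (the cups service is just an example):

# chkconfig --list | grep :on
# chkconfig cups off
# service cups stop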

6: Secure your required services

If your new Linux PC has some services that will receive connections from the Internet, make sure you understand their configurations and tune them as necessary. For example, if your Linux PC will receive secure shell connections, make sure you check the sshd_config file (for Mandriva it is /etc/ssh/sshd_config) and disable options like root login. Every Linux PC has a root user, so you should disable root login via ssh in order to dissuade brute-force password-cracking attempts against your super-user account.
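
The relevant directive in /etc/ssh/sshd_config is shown below; restart the daemon afterwards (for example with /etc/init.d/sshd restart) for the change to take effect:

PermitRootLogin no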

7: Tune kernel networking security options

The Linux kernel itself can provide some additional networking security. Familiarize yourself with the options in the /etc/sysctl.conf file and tune them as needed. Options in this file control, for example, what type of network information is logged in your system logs.
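
A few commonly tuned entries in /etc/sysctl.conf, as a sketch (apply them with sysctl -p); the first controls logging of packets with impossible source addresses, the others tighten basic network behaviour:

net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1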

8: Connect the PC to a router

A hardware router is a pretty common piece of household computer hardware these days. This is the front-line security for any home or business network, and it allows multiple PCs to share one visible, or external, Internet address. This is generally bad news for any hacker or otherwise malicious program that may take a look at your new Linux PC, as it blocks any and all network traffic that you don't specifically allow. Home networking routers are just smaller versions of what the big companies use to separate their corporate infrastructure from the Internet.

9: Update

Always keep the software on your computer up to date with the latest security patches, whether you are running Linux, Windows, BSD or WhoKnowsWhat. Your distribution will release regular security patches that should be applied and are available off the Internet. As with Windows, this should always be your first Internet destination.

10: Other software

Your second Internet stop may be to install some other hardening or system monitoring software.

Bastille-Linux (http://www.bastille-linux.org/) is a program that can be used to "harden" or secure certain aspects of your new Linux PC. It interactively develops a security policy that is applied to the system and can produce reports on potential security shortcomings. On top of that, it is a great tool for learning the ins and outs of securing your Linux PC.

Tripwire (http://sourceforge.net/projects/tripwire) is a software package that monitors your system binaries for unauthorized modifications. A hacker will often modify exactly those system binaries that would be useful in detecting an intrusion. The modified programs then report false information to you, allowing the hacker to maintain his control over your system.

How do I add a user and a command to sudoers?

Try this one
Code:

nagios ALL=NOPASSWD:/etc/init.d/xinetd

But the drawback is that nagios will be able to do more than just "xinetd status".
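
If the intent is to allow only the status action, you can pin the argument in the sudoers entry as well (a sketch; sudo matches the listed arguments literally):
Code:

nagios ALL=NOPASSWD:/etc/init.d/xinetd status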


Linux Command Line Tips

Command line skills are something you pick up over time. When something needs to be done, you work out how, and from then on you know how to do it. Few take the time to systematically learn the ins and outs of the tools at their disposal, however, and so may not be aware of all the possibilities of even the most basic utilities. In this tip we'll be looking at one of the staples of any shell toolbox: the find utility.

As the name suggests, find is a program for searching your disk for files and directories satisfying given criteria. By default, it starts in the current directory and traverses down to all lower directories. Find is able to search based upon a number of different file attributes and also perform actions on the results, usually running some program for each result.

Let's take a look at a few examples: firstly, to find all html files in the current directory or lower you would use:

find -name "*.html" -type f

Now, this command has two tests. The first, "-name", is used to test against each filename in the search. If you need this to be case-insensitive, use "-iname" instead. The second test is "-type", which is used to specify the kind of thing you are interested in. The "f" says we are looking for regular files; however, we could have used "d" for directories or "l" for symbolic links, for example. A full list of options can be found on the find manpage.

In this example we didn't specify a location since we were looking in the current directory. You can start the search in any directory (or directories) on your system; for example, if we know that the html files will be in one of two places, we can make the search quicker and more accurate by specifying a start point:

find /var/www /home/nickg/public_html -name "*.html" -type f

This now searches in the Web server root, my home html root, and their subdirectories. Hopefully this will mean we get what we're looking for and not erroneous files like the Web browser cache, or html help files.

Find traverses down all subdirectories by default, but you can control this behaviour by specifying the maximum depth. If in the previous case you wanted only to search those two directories but go no further you can add "-maxdepth 1" to the command. Setting the max depth to 0 means that only the files given on the command line will be tested. Similarly you can set a minimum depth, so that you can avoid files sitting in the root.
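
For example, restricting the earlier search to just those two directories would look like this:

find /var/www /home/nickg/public_html -maxdepth 1 -name "*.html" -type f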

Another use for find is to search for files belonging to a given user. So to search for all files belonging to me on my system I use:

find / -user nickg

The same thing can be done for searching based upon group, using the "-group" test.

The next category of tests relates to time, allowing you to search for files based on when their status was last changed, when they were last accessed, or when they were last modified, using "-ctime", "-atime" and "-mtime" respectively (note that "-ctime" refers to the inode status change time, not the creation time). For example, to find files changed in the past day:

find -ctime -1

Note the "-" before the 1, this means that we're looking at a range backwards from the current day. If you need more precision you can use the by minute variations, "-cmin", "-amin" and "-mmin". If you've just made a mistake and you're not sure which files may have been affected, it's easy to narrow things down using find:

find -mmin -5

The standard action for find to perform on the files is to print out the filename, which is why if you've been following along you've ended up with these long lists of filenames. This is perfect if you want to use this data as input into another filter, but if you need a little more information about the results you can get find to give you the same sort of output as you'd get from ls -l:

find -user nickg -iname "*.html" -ls

This will give permission and time info.

Lastly, you can get find to run any program on each result by using the "-exec" action. The following command will delete all files in your home directory with the extension ".tmp":

find -name ".tmp" -exec rm {} \;

The two braces ("{}") are replaced by each filename matched, and the escaped semicolon is needed to tell find when the command is finished. Find is often used in combination with chmod to quickly change file permissions over a large set of files, or with grep and sed to selectively find or modify text using regular expressions. This is just the tip of the iceberg where find is concerned, by using it as the input to a script you can automate time consuming tasks, such as cleaning up files that have not been accessed in a year, or automatically backing up modified files. This kind of power means that find remains one of the best tools a Linux user has at their disposal.

Howto Install NVIDIA 3D Drivers

**Installing the Drivers**

1. Make sure you have the kernel-sources, gcc and make packages installed.

2. Download the latest driver from Nvidia's site

3. Go into runlevel 3 (no GUI). This can be achieved in several ways:

a) By pressing CTRL+ALT+F1 (or F2-F6), then logging in as root and typing init 3

b) By typing a 3 at the GRUB boot prompt.

c) By editing your /etc/inittab. See below for details.

d) Debian/Ubuntu users may need to use /etc/init.d/gdm stop instead.

4. Log in as root user, if you aren't already.

5. Find the driver you just downloaded and run it using something like sh NVIDIA-1.0.8174.run

6. If it gives you any of the errors below, ignore them and continue:

a) It warns you about rivafb support.
b) It tells you it can't find a precompiled kernel module off the Nvidia website.

7. Stay logged in as root and type modprobe nvidia

NOTE: As of version 8174 of the Nvidia driver, you no longer need to manually edit your xorg.conf file. Skip steps 8 and 9 if you are installing this version or newer.

8. Edit your /etc/X11/xorg.conf in the section marked "Device" that looks something like this:
Code:

Section "Device" Identifier "Nvidia Geforce 2" Driver "nv"


9. Change the "nv" line to "nvidia"

NOTE: Some distributions use XFree86 instead of X.org. The steps are the same, you're simply editing a different file: the /etc/X11/XF86Config-4 file.

10. Log out as root, and back in as a regular user, then type startx

11. If you see the Nvidia logo flash, then you're done. If not, your X server will error out. Start a thread, post the errors, and we'll try to help you from there.


**Editing your /etc/inittab**
Some distributions require you to edit your /etc/inittab file in order to boot into non-graphical mode, which is required in order to install the Nvidia drivers. Here is how you do it:

1. Log in as root user from a console window by typing su and your root (administrator) password when it prompts you.

2. Open up your /etc/inittab file with a simple text editor. Any one will do but I like pico, so for example purposes that's what I'll use:
Code:

pico /etc/inittab


3. Look for a line that looks something like this:
Code:

id:5:initdefault:


4. Change the 5 to a 3

5. Save the file and reboot.

6. Once Linux goes through its regular boot screens you should be greeted with a simple text login screen. Continue from step 4 above.

Java Extends Global Distribution with Canonical's Ubuntu Linux Release

Sun Microsystems and Canonical Ltd announced yesterday the availability of a complete, production quality Java technology stack and developer tools with the latest release of Ubuntu v7.04.

According to a statement on Sun’s website, Ubuntu v7.04 is set to make it easier for GNU/Linux developers to leverage the Java platform in their applications.

This stack -- which comprises key Java technologies such as GlassFish v1 (the open source Java Platform, Enterprise Edition 5 implementation), Java Platform, Standard Edition, Java DB 10.2 and NetBeans IDE 5.5 -- will be available in the Multiverse component of the Canonical-sponsored Ubuntu repository effective April 19.

Ubuntu users can install the solution over the network with apt-get and other standard software management tools.
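
Assuming the Multiverse repository is enabled, installing from the command line looks roughly like this (the package names shown are illustrative; check the repository for the exact names):

$ sudo apt-get update
$ sudo apt-get install sun-java6-jdk netbeans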

"Sun and Canonical are working together to bring the full power of the Java platform in a fully integrated and easy to install fashion to the free and open source software communities," said Ian Murdock, chief operating systems officer, Sun.

"Packaging NetBeans and Java for Ubuntu ensures that we are able to distribute it efficiently to the huge community of Ubuntu users," said Mark Shuttleworth, founder of Ubuntu.

"Developers who are interested in Sun's latest Java technology can install it instantly if they are running Ubuntu. As Java components are released under free software licenses, we will consider these components for inclusion in the core of Ubuntu."

The hard truth about installing Linux

Having had a go at a few more Linux installations than the average newbie, I can say that Ubuntu is probably about as easy an install as it's going to get. Sure, you can run into hardware driver issues, but Ubuntu advocates are right when they say it's probably just as easy as, and maybe easier than, installing Windows. However, at this point in time it needs to be.

The hard fact is that most computer users don't want to go through the pain of installing an operating system. They just want to turn on their computer and start using it.

Probably the single most important factor in the success of Windows, aside from Microsoft's marketing muscle, is that you can walk into a shop and buy a working Windows computer. What's more, you can be reasonably confident that the computer will work with a wider range of software and hardware than either a Linux computer or Apple Mac.

Within its own walled garden, a Mac can beat the pants off a Windows computer for ease of use, reliability and overall design elegance. However, most users still opt for a Windows computer because it allows them a wider choice of hardware and software.

Ubuntu, being a Linux distribution, is a more stable and secure operating system than Windows, not to mention free to download. Yet Windows still reigns supreme in terms of market share. Why?

As far as the Linux market is concerned, Ubuntu, like it or not, is increasingly touted by many as the most likely candidate to prise disaffected Windows users off the Microsoft teat. However, despite being easy to install, Ubuntu, like all Linux distributions, is in most cases not plug and play.

I know many Ubuntu users are going to be jumping up and down right now ready to flame me and tell me how they got Ubuntu to work with their hardware, peripherals and wireless network the very first time. However, I've been to your forum and seen how many haven't. And these are computer users who take the time out to visit a forum.

I also have no doubt that many Windows XP users who have attempted to upgrade to Vista using the boxed software are having similar problems. The Wall Street Journal technology writer Walter Mossberg confessed to being amazed at how many makers of Windows software and hardware have failed to update their products to work smoothly, or to work at all, with Vista.

Where I am going with this is that the key to the success of Ubuntu or any other Linux distribution in the consumer market is getting mainstream hardware manufacturers to release pre-installed plug and play versions of the operating system. Dell has already committed to making this happen and other hardware makers will be watching to see how it goes.

Like many computer users, I want to be able to walk into a computer store and walk out with a working Linux computer, whether it be a Dell, HP, Acer or a white box. I want to take that computer home and be able to connect it to my printer, scanner and digital camera and have them recognised. If it's a laptop, I want to take that computer into a wireless hotspot, click on a view available networks icon and, provided I have the WEP key, be able to connect to the Internet.

I know all of that is a lot to ask, but to be honest, if Linux and its most popular distributions, like Ubuntu and Suse, are to gain more than a couple of percent market share on the desktop, then that is what is necessary. Perhaps in order to make it happen, the funders of the most popular distributions should be spending as much of their budgets on selling the benefits of Linux to consumers and hardware vendors as on development. Microsoft certainly does.

Homegrown high-performance computing

Once the domain of monolithic, multimillion-dollar supercomputers from Cray and IBM, HPC (high-performance computing) is now firmly within reach of today's enterprise, thanks to the affordable computing power of clustered standards-based Linux and Microsoft servers running commodity Intel Xeon and AMD Opteron processors. Many early movers are in fact already capitalizing on in-house HPC, assembling and managing small-scale clusters on their own.

Yet building the hardware and software for an HPC environment remains a complex, highly specialized undertaking. As such, few organizations outside university engineering and research departments and specialized vertical markets such as oil and gas exploration, bioscience, and financial research have heeded the call. No longer borrowing time on others' massive HPC architectures, these pioneers, however, are fast proving the potential of small-scale, do-it-yourself clustering in enterprise settings. And as the case is made for few-node clusters, expect organizations beyond these niches to begin tapping the competitive edge of in-house HPC.

The four case studies assembled here illustrate the pain and complexity of building a successful HPC environment, including the sensitive hardware and software dependencies that affect performance and reliability, as well as the painstaking work that goes into parallelizing serial apps to work successfully in a clustered environment.

Worth noting is that, although specialized high-performance, low-latency interconnects such as Myrinet, InfiniBand, and Quadrics are often touted as de facto solutions for interprocess HPC communications, three of the four organizations profiled found commodity Gigabit Ethernet adequate for their purposes -- and much less expensive. One in fact took every measure possible to avoid message passing and cutting-edge interconnects in order to enhance reliability.

New to the HPC market, Microsoft Windows Compute Cluster Server 2003 proved appealing to two organizations looking to integrate their HPC cluster into an existing Microsoft environment. So far, results have been positive.

Finally, one organization found that delegating much of the hardware and software configuration to a specialized HPC hardware vendor/integrator made the whole process considerably easier.

BAE Systems tests and tests some more

When it comes to delivering advanced defense and aerospace systems, the argument in favor of developing an in-house HPC cluster is overwhelming. Perhaps it's not surprising then to find that the technology and engineering services group at BAE Systems already has a fair amount of experience constructing HPC clusters from HP Alpha and Opteron-based Linux servers. Integrating previous HPC systems into the U.K.-based global defense company's enterprise, however, has proved costly.

"We've found the TCO implications of maintaining two or more disparate systems -- such as Windows, Linux -- and Unix, to be too high, particularly in terms of support people," says Jamil Appa, group leader of technology at engineering services at BAE. "We're looking to provide a technical computing environment that integrates easily with the rest of our IT environment, including systems like Active Directory."

The group is currently assessing two Microsoft Windows Compute Cluster Server 2003 clusters -- both of which have been in testing for several months now. Tools built from Microsoft .Net 3.0 Workflow Foundation and Communications Foundation have enabled BAE engineers to create an efficient workflow environment in which they can collaborate effectively during the design process and access relevant parts of the systems from their own customized views with tools relevant to their tasks. One test bed is a six-node cluster of HP ProLiant dual-core, dual-processor Opteron-based servers; the other is a 12-node mix of Opteron- and Woodcrest-based servers from Supermicro.

If there's anything that BAE has learned from its testing, it's that little changes can have big performance implications.

"We're running our clusters with a whole variety of interconnects, including Gigabit Ethernet, Quadric, and a Voltaire InfiniBand switch," Appa says. "We've also been running both Microsoft and HP versions of MPI [Message Passing Interface]. We've found that all these elements have different sweet spots and behave differently depending on the application." In the long run, this testing will enable the technology and engineering services group to provide other BAE business units looking to implement HPC with their own personal HPC "shopping lists."

As for interfaces, "depending on the application, the size of your cluster (preferably small), and the types of switches you use, Gigabit Ethernet really isn't that bad," Appa says. His group has been using Gigabit switches from HP, which "for our purposes, are very good."

Appa has also tested several compilers, and he cautions not to skimp on these tools: "A US$100 compiler might make your code run 20 percent slower than a top-end compiler, so you end up having to pay for a machine that is 20 percent larger. Which is more expensive?"

Each of Appa's configurations sits on three networks: one for message passing, one for accessing the file system, and one for management and submitting jobs. To access NAS, Appa uses iSCSI over Gigabit Ethernet, rather than FC (Fibre Channel), and has a high-performance parallel file system consisting of open source Lustre object storage technology. Why? "As clusters get larger and you have more cores running processes that are all reading one file on your file system, your file system really needs to scale or you'll be in trouble," Appa explains.

Meanwhile, Windows Compute Cluster has simplified both cluster management and user training -- which makes for additional benefits when it comes to freeing up staff for the more vital task of optimizing BAE apps. Although BAE's software is already set up for HPC, Appa believes the whole process of parallelizing existing apps is reaching a turning point. "Our algorithms date back to the '80s and do not make best use of multicore technologies," he says. "We're all going to have to reconsider how we write our algorithms or we'll all suffer."

Although each endeavor to bring HPC in-house will differ based on an enterprise's clustering needs, BAE's Appa has some sage advice for anyone considering the journey.

"You can't assume that somebody will come along with a magic wand and give you the perfect HPC solution," Appa says. "You really need to try everything out, especially if you have in-house code. There's so much variation and change in HPC technology, and so much is code-dependent. You really have to understand the interaction between the hardware and software."

Luckily, those attempting to bring HPC in-house will not be alone. "The HPC community itself is quite small and very open and willing to share valuable information," Appa says.

Appa points out that Fluent has an excellent benchmarking site that demonstrates performance variations among various hardware and software combinations. In his case, the Microsoft Institute for High Performance Computing at the University of Southampton provided sound advice on what hardware worked and what didn't, particularly during the beta phase.

Virginia Tech starts from scratch

At Virginia Tech's Advanced Research Institute (ARI), constructing an HPC cluster for cancer research has been an educational experience for the electrical and computer engineering grad students involved.

With little prior HPC experience, the students built a 16-node cluster and parallelized apps they had written in MATLAB, a numerical programming environment, over the course of several months. The project taps huge amounts of data acquired from biologists and physicians to perform molecular profiling of cancer patients. The students are also working on vehicle-related data for transportation projects.

Rather than make every aspect a learning experience, when it came to choosing an HPC platform, the students and professors decided to stick with what they already knew: Microsoft Windows.

"Our students had already been running MATLAB and all their other programs on Windows," says Dr. Saifur Rahman, director of ARI. "We didn't want to have to retrain them on Linux." As was the case at BAE Systems, there were also obvious advantages to a cluster that could integrate easily with the rest of ARI's Windows infrastructure, including Active Directory.

Microsoft had already approached Virginia Tech to be an early adopter of Windows Compute Cluster Server 2003, so Dr. Rahman and his team said yes and started looking for the right hardware. They vetted several vendors, but when they found out Microsoft was performing its own testing on Hewlett-Packard servers, they decided to go with HP. "We knew we'd need help from Microsoft to fix various bugs," says Dr. Rahman, "and since all their experience was on HP servers, we felt we'd have the most success with HP."

So with help from Microsoft and HP, ARI installed 16 HP ProLiant DL145 servers with dual-core 2.0GHz AMD Opteron 270 processors and 1GB of RAM each. On the same rack, ARI installed 1TB of HP FC storage. The rack also includes one head node, as well as an HP ProLiant DL385 G1 server with two dual-core 2.4GHz AMD64 processors and 4GB of RAM.

As did BAE Systems, ARI decided to stick with Gigabit Ethernet for its cluster interconnect, mainly because it was what the team knew. "There are other interconnects that are faster, but we've found that Gigabit Ethernet is pretty robust and works fine for our purposes," Dr. Rahman says. And after some servers overheated, ARI placed the entire cluster in a 55-degree Fahrenheit chilled server room.

ARI found parallelizing MATLAB apps to be a significant challenge requiring a number of iterations. "The students would work on parallelizing the algorithms, then run case studies to verify the results they were getting with the clustered applications were similar to results they got when they ran one machine," Dr. Rahman says.

At first, the results weren't coinciding, and the students had to learn more about how to parallelize effectively and clean up what they had already coded. "We missed some important relationships at first," Dr. Rahman says. With some help from MATLAB, it took two graduate students about a month to get the app parallelization right.
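
The verification loop Dr. Rahman describes boils down to running the same algorithm serially and on the cluster, then checking that the two outputs agree within a numerical tolerance. Below is a rough sketch of that check, written in Python rather than MATLAB purely for illustration; the function name, tolerance, and the run_* helpers in the comment are assumptions, not anything from ARI's code.

    import numpy as np

    def results_match(serial_result, parallel_result, tol=1e-6):
        # Convert both runs' outputs to arrays and compare element-wise.
        serial = np.asarray(serial_result, dtype=float)
        parallel = np.asarray(parallel_result, dtype=float)
        # allclose tolerates the tiny floating-point differences that show up
        # when work is split across nodes and recombined in a different order.
        return np.allclose(serial, parallel, rtol=tol, atol=tol)

    # Example (hypothetical helpers):
    # results_match(run_on_one_machine(data), run_on_cluster(data))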

Dr. Rahman feels that the team's diverse expertise was a large factor in the project's success. One of the grad students had deep knowledge of molecular-level data quality, biomarkers, and the relevance of different data types; another offered a lot of hardware expertise; and the IT person had much experience interacting with vendors effectively. MATLAB provided help in determining which toolboxes were relevant to the task.

"When we went to MATLAB, they were just getting started with HPC," Dr. Rahman says. "I hope they will start to pay more attention, as it would be nice if they were all ready so we didn't have to spend months on this."

There were also hardware communications glitches.

"At first we had some problems controlling the servers as they talked to each other and the head node," Dr. Rahman says. "Sometimes they wouldn't respond. In other cases we wouldn't see any data coming through." Solving the problem took a lot of reconfiguring and reconnecting. "Perhaps we were giving the wrong commands at first. We're not sure," he adds. There were also problems with incorrect server and software license manager configurations.

Dr. Rahman says that managing the cluster has been relatively trouble-free with Windows Compute Cluster Server 2003 and adds that if he could do this all over again, he'd send his students to Microsoft for a longer time to learn more of what Microsoft itself has discovered about building clusters with HP servers. The use of HPC has enabled ARI researchers to dive much more deeply into molecular data, not only analyzing differences in relationships among disparate classes of subjects, but also revealing more subtle but important variations within each class.

Uptime counts for Merlin

Whereas most HPC implementations are the province of scientists and engineers hidden away in R&D departments, Merlin Securities' HPC solution interfaces directly with its hedge fund customers. That's why 24/7 uptime and security were key HPC design requirements for Merlin, right alongside performance.

"We had to be extremely risk-averse in designing our cluster and choosing its components," says Mike Mettke, senior database administrator at Merlin.

A small prime brokerage firm serving the hedge fund industry, Merlin must contend with several larger competitors that benefit significantly from the economies of scale. Morgan Stanley, Merrill Lynch, and Bear Stearns, for example, run large mainframes that analyze millions of trades at the end of the day and return reports via batch processing the next morning. Merlin stakes its competitive edge on using its HPC cluster to deliver trading information in real time and allowing customers to slice and dice data multiple ways to uncover valuable insights, such as daily analyst trading performance as compared with other analysts, other market securities, and numerous market benchmarks. "We focus on helping clients explain not only what happened but why it happened," says CTO Amr Mohamed.

To do this, Merlin built its own highly parallelized analysis tools, which it runs on a high-performance Oracle RAC (Real Application Cluster) installed on a rack of Dell PowerEdge 1850 and 2850 dual-core Xeon servers. Data storage is provided by EMC CLARiiON 2Gbps and 4Gbps FC storage towers. Sitting on top of Oracle is Merlin's HPC task-scheduling software, also created in-house, and an Oracle data mart that serves as a temporary holding ground for frequently used data subsets, much like a cache. Most of the high-speed calculations run directly on the Oracle RAC, which is fronted by a series of BEA WebLogic app servers that take in requests from a set of redundant load balancers sitting behind the company's customer-facing Apache Web servers. Sitting in front of each of the three layers are sets of redundant firewalls.

Cluster performance is key to running complex calculations in real time, but for Merlin, performance could never come at the expense of enterprise-level reliability, scalability, and 24/7 uptime, requirements that led to several crucial design decisions.

First, tightly coupled parallel processing via message passing was simply out of the question. Instead, Merlin's architects and programmers put tremendous effort into dividing processes in an "embarrassingly parallel" fashion, without any interdependencies at all. This benefits both scalability and reliability: the high-speed, low-latency communication required for interprocess message passing creates scalability bottlenecks, and it calls for cutting-edge interconnects such as Myrinet and InfiniBand, which don't have the reliability track record of Gigabit Ethernet.
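
To make the distinction concrete, here is a minimal Python sketch of an "embarrassingly parallel" decomposition: each work item is processed on its own and results are only combined at the end, so no messages pass between workers while they run. The batch structure and the per-batch calculation are invented for illustration; Merlin's actual analytics code is not public.

    from concurrent.futures import ProcessPoolExecutor

    def analyze_batch(batch):
        # Hypothetical per-batch calculation; each batch is self-contained,
        # so workers never need to talk to one another.
        return sum(trade["qty"] * trade["price"] for trade in batch)

    def run_all(batches):
        with ProcessPoolExecutor() as pool:
            # map() hands each batch to a separate process; results are
            # combined only after every independent task has finished.
            return list(pool.map(analyze_batch, batches))

    if __name__ == "__main__":
        batches = [[{"qty": 10, "price": 2.5}], [{"qty": 4, "price": 7.0}]]
        print(run_all(batches))   # [25.0, 28.0]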

"We didn't want some new interconnect driver crashing the system," Mohamed says, adding that straight Gigabit has also helped Merlin achieve considerable cost savings.

Reliability and enterprise-grade support fueled Merlin's decision to stick with an Oracle RAC, which has high-quality fault-tolerant fail-over features; dual-processor Dell PowerEdge servers; high-end EMC CLARiiON FC storage; and F5 load balancers.

"There are lots of funky platforms for HPC out there and high-bandwidth data storage solutions that can pump data at amazing rates," Mettke says. "The problem is that you end up dealing with lots of different vendors, some of whom can't deliver the 24/7 enterprise-level support you need. That adds another element of risk."

Finally, all code was written using Java, C++, and SQL.

"I've been on the other end running code written in Assembler on thousands of nodes," Mettke says. "We want the speed, but not at the expense of system crashes in the middle of a trading day. You can claim you have the best cluster out there, but it doesn't matter if there's no show when it's showtime."

Mettke adds that the architecture of Merlin's HPC infrastructure is constantly evolving to accommodate new data and applications.

Aerion gets HPC help

For organizations looking to get a cluster up and running quickly, enlisting the help of specialized Linux HPC hardware vendors such as Linux Networx and Verari Systems can cut down development time significantly. Not only do these companies sell and configure standard hardware, but they often have the expertise to deliver turnkey configurations with apps installed, tuned, and tested. Such was the case for Aerion, a small aeronautical engineering company that tapped Linux Networx to bring the upside of in-house HPC to its business of developing business jets.

Aerion, which works on the preliminary jet design process, relies on larger aerospace partners for design completion, as well as manufacturing and service. One of the company's projects, an early-stage design for a supersonic business jet, required particularly demanding CFD (computational fluid dynamics) analysis.

"In many commercial subsonic transport projects, you can develop different parts of the jet independently, then put all the pieces together and refine the design," says Aerion research engineer Andres Garzon. "But with supersonic jets, everything is so integrated and interactive that it's really impractical to develop each element apart from the others."

At the time, Aerion had been running commercial CFD software from Fluent on two separate dual-processor 3.06GHz Xeon Linux workstations. This setup worked well for analyzing diverse configurations and components and running Euler equations, which model airflow but leave out some essential fluid properties such as viscosity. "To really be accurate, you need to run the more complex Navier-Stokes calculations, which have many more terms to solve," Garzon says. And achieving the computing performance necessary to tackle that level of complexity meant turning to HPC.

Of course, small organizations such as Aerion don't always have the resources on hand to fly solo on HPC -- not to mention the fact that Aerion was also in the process of switching from Fluent to a series of powerful, free tools developed by NASA. So when Garzon stumbled on a Linux Networx booth at an American Institute of Aeronautics and Astronautics meeting three years ago, and the reps he spoke with offered to provide the hardware and much of the integration and testing work for the NASA apps Aerion wanted to use, he took them up on the opportunity to get HPC up and running quickly.

Working with Linux Networx, Aerion configured an 8-node Linux Networx LS-P cluster of dual-processor AMD Opteron 246-based servers with 4GB per node, plus a ninth server to act as a master node. The NASA code requires a significant amount of complex message passing among parallel processes using MPI, which usually calls for a very high-speed, low-latency interconnect such as InfiniBand or Myrinet. Because Aerion's budget was limited, Linux Networx offered to benchmark the apps with Myrinet, InfiniBand, and Gigabit Ethernet. Although performance under Myrinet and InfiniBand was superior (and roughly equivalent between the two), the overall difference was not dramatic enough to justify the expense. So, Linux Networx delivered a Gigabit Ethernet configuration, saving around $10,000, Garzon estimates.
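
For readers unfamiliar with MPI-style message passing, the sketch below uses mpi4py (Python bindings for MPI) to show the kind of neighbor-to-neighbor exchange a parallel solver performs over and over; it is this constant boundary traffic that makes interconnect latency matter. The array contents and exchange pattern are invented for illustration and have nothing to do with the actual NASA code, which is compiled, not Python.

    # Run with something like: mpiexec -n 4 python demo.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank owns one slice of a made-up computational domain.
    local = np.full(1000, float(rank))

    # Pass a boundary value to the next rank and pick one up from the
    # previous rank; real solvers repeat exchanges like this every iteration.
    if rank + 1 < size:
        comm.send(local[-1], dest=rank + 1, tag=0)
    if rank > 0:
        ghost = comm.recv(source=rank - 1, tag=0)
        local[0] = 0.5 * (local[0] + ghost)

    print("rank %d of %d finished its slice" % (rank, size))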

As for storage, it is all local -- rather than SAN-based -- and is managed by the master node, which mirrors the OS and file system to the compute nodes. Thus, data is stored both on the local drives and the master node.

Linux Networx recompiled the NASA code -- which was originally developed to run on SGI machines -- for the Linux cluster. It also set up appropriate flags for the system and fine-tuned the cluster so that Aerion would be operational in a few days. Management is provided by Linux Networx Clusterworx, which monitors availability on the nodes, creates the image and payload for each node, and reprovisions nodes as necessary.

In all, Garzon found the process of bringing HPC in-house with the aid of Linux Networx to be relatively trouble-free and plans to expand the system to run additional cases simultaneously and to reduce compute time on time-sensitive calculations.

IBM Makes Your Linux Run

All Linux x86 applications will now run on IBM's Power processor-based System p servers.

The new capability dramatically expands the application availability for IBM's Power by enabling x86 Linux apps to run without modification through a new technology IBM is calling System p Application Virtual Environment (System p AVE).

"We have a lot of Linux on Power apps -- some 2,800 native ones -- but a lot of times when customers do a server consolidation, it's not just the main applications that need to run," Scott Handy, vice president of worldwide marketing and strategy for System p, told internetnews.com.

"We expect the main workloads will all fit in the 2,800 apps we already have, but it's all the other apps that need to work, too."

The way that p AVE works, according to Handy, is entirely seamless to end users. The system will automatically determine what is native and what is virtual without user intervention or setup.

So you could have an Apache Web server running as native Linux on Power, and then if you run some other Linux x86 binary, the OS would realize it's not native Power and would then pass it to p AVE, which would then run it in the p AVE environment.
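
IBM hasn't published how that dispatch decision is made, but one plausible way for a loader to tell an x86 binary from a native PowerPC one is simply to read the architecture field in the ELF header. The Python sketch below is a guess at that general idea, for illustration only; it is not a description of p AVE's internals.

    import struct

    EM_386, EM_X86_64 = 3, 62      # x86 machine codes
    EM_PPC, EM_PPC64 = 20, 21      # PowerPC machine codes

    def needs_translation(path):
        # Read enough of the ELF header to reach e_machine (offset 18).
        with open(path, "rb") as f:
            header = f.read(20)
        if header[:4] != b"\x7fELF":
            raise ValueError("not an ELF binary")
        # e_ident[EI_DATA] gives the byte order of the remaining fields.
        endian = "<" if header[5] == 1 else ">"
        (machine,) = struct.unpack_from(endian + "H", header, 18)
        # x86 binaries would be handed to the translation layer;
        # PowerPC binaries would run natively.
        return machine in (EM_386, EM_X86_64)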

Not all Linux applications should necessarily be run on p AVE, though. Handy noted there is a performance trade-off with p AVE as opposed to running an application natively on Power.

"The performance characteristics will depend on the types of workloads. We expect Java to perform well, but with certain applications, the performance hit could be in the range of 10 percent," Handy said. "If the application is a heavily performance-oriented application, it's probably not the best candidate for p AVE."

For those applications that are resource hungry, Handy suggests that they should just take advantage of IBM's Chiphopper program and port their apps to Power. In Handy's view, it's not too difficult, as all that is required is a recompile of code targeted for Power.

IBM currently works with Novell and Red Hat for its Linux on Power efforts. According to Handy, the companies have agreed to include their x86 libraries as part of their Power versions once the p AVE technology becomes generally available.

IBM has been running a private beta, which is now expanding into an open beta for System p users. The full release is not expected until later this year.

The p AVE technology will not be a direct-revenue generator for IBM. Handy explained that IBM will not charge System p users extra for the technology but will instead just consider it to be part of the overall value proposition.

"It's really nice to just tell customers that no matter what the app is it'll work."

Michael Dell's Linux, the Death of BlackBerry, 3 More Dads and Grads Gift Ideas

I've been spending a lot of time talking to folks who want to try the Dell (Nasdaq: DELL) Linux solution. I'm getting the sense that we have all been looking at Linux on the desktop wrong, and that products like Linspire are simply wrongheaded.

Last week, Research In Motion (Nasdaq: RIMM) may have actually destroyed the BlackBerry's growth opportunity in large enterprise and government. I wouldn't touch the platform with a 10-foot pole now, and I'll bet a number of my peers quickly come to the same conclusion.

Finally, and I'm having a lot of fun with this, I'll be suggesting another three things to buy for your dad or grad.

The Future of Dell and Linux

One of the problems with an established market -- like, say, the mainframe market a few decades ago -- is that emerging products and markets are filtered through the same set of core concepts. This leads to otherwise smart people either assuming failure or assuring it through bad decisions.

Michael Dell is so intent on making this work he has taken to carrying a high-performance Linux laptop himself from time to time. His runs Ubuntu 7.04, VMware, Open Office, Automatix2, Firefox and Evolution (to connect to e-mail).

It is very unusual for a CEO to take a personal interest in a project like this, and he has made it clear that he doesn't want anything to go out that he, personally, can't be proud of. That says a lot for how serious he is about this effort.

I've been talking to a number of people who are thinking of trying this Dell program both on a small and a large scale. I also observed the last time the company did this, and I know -- probably nearly as well as Dell does -- that Linux simply does not throw off the kind of lucrative secondary business that Windows does on desktops.

That doesn't mean the problem can't be solved, but it does mean you have to solve it differently and look for ways to control costs up front.

Part of that has to be finding ways to leverage the Linux model and not fight it -- and the Linux model puts the responsibility for support on the Linux community and not the vendor selling the hardware.
Up to the Linux Community

This would take you down a completely different path than the one you take for Windows, where the support responsibility is shared between Microsoft (Nasdaq: MSFT) and the vendor, but the vendor carries much of the initial responsibility.

This suggests that Dell not preload Linux at all, but provide systems that can be imaged by their Linux customers -- configured with hardware that already has broad Linux driver support.

In effect, the hardware configurations would largely be dictated by what worked best with the greatest number of Linux distributions, and that would be based largely on feedback from the people currently using Linux.

In short, rather than building for performance or price leadership, the Linux systems would be built -- at least initially -- to be the best for the most Linux distributions, and much of the support would be passed on to the buyer after the fact.

Limited support would focus primarily on maintaining links to current resources, having up-to-date drivers, and providing advice on how to accomplish critical tasks so that Linux customers would see a sustained value from remaining connected to Dell.

I'm slowly coming around to the idea this will work, but I remain concerned that Linux supporters won't cut Dell enough slack so it can flesh out the solution. Remember, we've had decades to build the structure that supports Windows on the desktop, and we don't really have anything like it for Linux. As we've discussed, Linux can't simply slide into the Windows support structure, because it doesn't throw off enough profit and there is no "Microsoft" -- there isn't even a single version of "Linux."

In the end, the success or failure of this offering will come down to how helpful and tolerant the Linux supporters are. If they vocally cheer Dell on, come to Dell's defense as they would one of their own, and prevent the kind of nasty personal attacks that are all too common on the Web today, Dell will likely stick with this effort until it is successful. If they don't, Dell will grow tired of the pain and move on, and it will be years before another OEM (original equipment manufacturer) tries this again, if ever.
RIM Blows Off Customers and Makes Motorola's Day

Last week, RIM had one of the nastiest blackouts in the history of this technology segment. It was down for an extended period of time without warning or explanation.

On Friday, the company issued the lamest apology I've ever seen. In a techno-worded piece, RIM basically said it didn't adequately test a software update -- or its recovery process. It didn't even try to address why the company didn't tell anyone anything for so long, but sort of said it would try to fix things.

RIM should have simply said that it really screwed up. It should have punished with extreme prejudice the folks who didn't test the update, who didn't ensure the recovery process, and who didn't tell customers what was going on in a timely manner -- this last actually being the most critical failing.

It should have done the JetBlue thing of making it clear that the CEO was going to personally ensure that this wouldn't happen again.

RIM's behavior came across like the company simply didn't care that it had left every single one of its customers high and dry, in what felt like the most incredibly stupid and arrogant misstep I've seen from any vendor in years.

We expect our communications devices to work. If there is an outage, we require the vendor let us know, tell us how long it will last, and provide some suggestions as to a workaround while it continues to go on. RIM did none of that.

The result was thousands of people wasting time trying to "fix" their Blackberries, calling their service providers and IT shops to get them fixed -- and calling them names when they couldn't -- and even throwing them on the ground in disgust when they wouldn't work. I wonder how many Blackberries were destroyed as a result of this fiasco.

RIM sells to large corporations and governments that have little tolerance for this kind of behavior from any vendor, and now likely view RIM as a problem to solve. Even the customers who were left high and dry are probably thinking it is time to make a change now, because who wants to depend on an undependable vendor?

This creates problems for RIM but a huge opportunity for Motorola (NYSE: MOT), which has the closest thing to a RIM solution in its Q and Good Technology solution. Good, which Motorola acquired a few months back, provides a service that's nearly identical to RIM, and the Q is actually more portable than most of RIM's current BlackBerry line.

I went from RIM to Good myself a number of months ago on a Q, and the transition was very easy. The result was as good as -- and sometimes arguably better than -- what I got from my aging BlackBerry.

If Motorola and maybe even Palm (Nasdaq: PALM) (new hardware is expected soon from Palm) can capitalize on the concerns that RIM has created with its lack of customer respect, they could make a killing here.

In the end -- and this is the second time something like this has happened -- RIM has shown itself to be untrustworthy, and that should have broad implications for the company going forward. I sure couldn't recommend it to anyone now, and I imagine many others feel the same.
More Gifts for Dads and Grads

We'll go a little more affordable this week and look at three products: one that conceals your tech, one that anticipates the iPhone, and one a bargain for the PC gamer.

The first is Scottevest. This company makes a line of clothing with James Bond-like pockets that can easily conceal your electronic gear while still allowing you to listen to the music.

It offers shirts, pants, casual wear, and some really cool convertible jackets (I have one of these) that can be turned into vests. Prices range from around US$35 for T-Shirts, to $450 for leather dress jackets. Scottevest has a Tactical 4.0 system that is unlike anything else I've ever seen in the market. It's $340, but it gives you two jackets with seven different looks. For the Secret Agent in all of us.

The problem with iPod headsets and PC headsets is the wires. You can get tangled up in them, particularly if you use them on planes to listen to airline programming. The best wireless stereo headset I've found, and one that should be great with the new iPhone, is the Plantronics Ultimate Stereo Bluetooth Headset. You can find it for around $110 online if you shop around. It comes with a Bluetooth adapter that will work with an iPod, but it's better with a good music phone -- like an iPhone -- and it looks like desk art in its charging stand. Really cool.

For gamers, one of the biggest bargains in the segment is the Gateway (NYSE: GTW) FX 530. This product gives up a little on cool outside looks for kick-butt performance, and Gateway was really the first to fully make use of the overclocking capability in the new Intel (Nasdaq: INTC) Core 2 Extreme Edition processors.

At $1,200 it gives you a lot of power for what is a very reasonable price. It also has the Nvidia GeForce 8800, which is currently the fastest graphics card on the market. I often don't say "bargain" and "gaming" in the same sentence, but I can here. If you like power more than flash, this is one hell of a nice system. Incredibly hot.

Tuesday, April 10, 2007

PMP does WiFi, downloads music, runs Linux

Flash memory and consumer device specialist SanDisk is shipping the first portable multimedia player (PMP) able to download music directly via WiFi, without the use of a PC. The "Sansa Connect," which runs embedded Linux and Mono, connects directly to online music services via WiFi, according to sources.

The Sansa Connect reportedly is the first iPod-style portable music player designed to interoperate directly with web services. Users can purchase and download music directly to the device, without having to boot up a PC or transfer files via USB. The device also comes with a Flickr photo browser and plays Internet radio.

The device sports a 2.2-inch color LCD screen and 4GB of built-in flash storage for multimedia. It measures 3.58 x 2.05 x 0.63 inches and weighs 2.72 oz. Formats supported include MP3, WMA, and Protected WMA.

The Sansa Connect is initially available with support for Yahoo Music, an online service that costs $15/month, or $143.88 billed annually. A free 30-day trial is available to new Sansa Connect users through the end of 2008, according to Yahoo!

Ironically, the Yahoo Music service does appear to require the user to have access to a PC running 32-bit Windows XP or Vista, according to a blog post by Mono project leader Miguel de Icaza.

De Icaza earlier told LinuxDevices, "The entire interface [for the Sansa Connect] is written in C#, I believe it is a single process that implements everything. But I do not know more than that."

Availability

The Sansa Connect is available direct from SanDisk, priced at $250 -- $110 more than SanDisk's non-WiFi-enabled 4GB model, the e260.

Novell buys its way deeper into Linux

Network executives and industry analysts are applauding Novell's move to acquire SuSE Linux, saying that it signals a new beginning for Novell and a boost for open source.

But they also say there are challenges ahead as Novell, which built its business on its proprietary NetWare operating system, attempts to reinvent itself as an open source player.

"This brings Novell and the word excitement back into the same sentence. It brings them back onto the radar screen in places that they've largely ignored for years. . . . And it helps Linux find a center," says Dan Kusnetzky, vice president of systems software at IDC. "Now the trick is can Novell allow the Linux community to still be a community instead of trying to run it as a corporate entity?"

Novell initially underscored its commitment to open source earlier this year when it announced that all its network services would run on Linux. Since then, Novell also bought Linux desktop, management and collaboration software provider Ximian.

SuSE, which Novell last week announced plans to buy for $210 million, completes Novell's Linux picture by letting the company provide products that span the desktop, server and applications that sit on top of the server operating system, says Novell CEO Jack Messman. But the question is, how will Novell integrate its new Linux products with its flagship NetWare systems?

Novell has a spotty record when it comes to integrating acquired technology, analysts say. Its forays with WordPerfect and Unix in the early '90s were busts.

"Novell did a very poor job with UnixWare. They really screwed that up," says Bill Claybrook, an analyst with Aberdeen Group. "But I don't think they'll make the same mistake with Linux. It's not like Linux is their operating system, so they can't do whatever they want with it. They have to be cognizant of what open source demands of you as a vendor."

Still, some customers worry that Novell might be too aggressive when it comes to adding proprietary extensions to SuSE Linux and could dilute the open-source nature of the operating system.

Ross Vandegrift, a network administrator at Seitz Technical Products in Avondale, Pa., which runs NetWare and Linux, says he'll watch the merger closely.

"If Novell is wise in their moves, they will add on to core distributions as modularly as possible," he says. "History has shown that closely integrating proprietary extensions with free software has been a reliability nightmare. . . . So long as Novell sticks to using the published, well-known, widely available interfaces, I don't foresee a problem."

Jim Michael, IS manager for the city of Chesterfield, Mo., is concerned about just the opposite. He worries that Novell won't add enough proprietary extensions, resulting in Linux products that are less functional than traditional NetWare. He's worried Novell's Linux focus could "sound the final death knell of NetWare."

"I want to continue to use NetWare, but if all of Novell's development energy is going into Linux products, logic tells me that the NetWare development will be getting the short end of the stick and thus Novell's own applications running on NetWare will quickly start to suffer," he says.

Another hurdle in Novell's Linux plans is that SuSE and Ximian have overlapping products. Ximian has its GNOME desktop, while SuSE has KDE. Management products also overlap: SuSE's Autoyast, Ximian's Red Carpet and Novell's ZENworks.

Novell Vice Chairman Chris Stone says that while development efforts have been converged, no decisions have been made as to which products will remain.

"Right now it is in product development, and we need to do the integration work to figure out from a branding and naming point how that will work out," he says. "We will be the No.1 company in the [Linux] business over the coming years."

Stone conceded that development of NetWare services for Linux would favor SuSE first, before Red Hat or other Linux distributions.

Joe Poole, technical director at Boscov's Department Stores in Reading, Pa., runs SuSE Linux in his data center and sees the merger as good news, but says Novell's ultimate commitment to Linux will be what counts.

"Novell has got our attention," he says. "What they do over the next six months will tell us whether they're really on the comeback trail or just experimenting."

Dreaming in the "Cloud" with the XIOS web operating system

Xcerion is a Swedish Internet startup whose founders include ex-Microsoft employees Lou Perazzoli and John Connors. The company will make headlines later this year when they officially unveil what they call an "Internet OS" dubbed XIOS that runs in a web browser. We took an early look at the XIOS concept and had the chance to talk about the project with the company's CEO, Daniel Arthursson.

The "operating system" (more on the scare quotes later) is based on XML, and using AJAX it connects to multiple back-end servers running Ubuntu Linux. XIOS is not an applet or a plug-in. Instead, the "OS" is really a complex AJAX-based system, and Arthursson says that it can be viewed as a virtual machine for XML applications.

How does it work? After downloading a couple of megabytes of code, a user can "boot up" XIOS in a web browser and start running the OS and applications Xcerion is developing. Xcerion says that XIOS and its default applications will be free, and the applications themselves will be open-sourced so that users can modify them to suit their own needs. Furthermore, XIOS is a development platform that will allow coders to create their own applications, so it's not just limited to productivity applications.

Is this the start of a true "Internet OS"? It all depends on what you mean by "operating system," really.

The one big difference between Xcerion's solution and existing "OS-in-a-browser" projects like YouOS and EyeOS is that it can also run in offline mode. XIOS will keep a user's data intact and then sync all changes with the virtual hard drive residing on a back-end server the next time a connection is regained. "It is important here to note that since XIOS supports multiple virtual hard drives, including third party hard drives, enterprise and personal ones, the data may not only be stored in Xcerion's data centers, but also on your own home server or corporate network," said Arthursson. "This is something that many services on the Internet cannot provide today. This also extends the reliability of XIOS."

Of course, XIOS is not a full operating system, as the term is traditionally defined. It requires a host OS to boot up and launch a web browser before it can start operating. A more accurate phrase is perhaps "Cloud OS" because running it requires access to the "cloud," that is, a network of services and connections that exist on the Internet. However, according to Arthursson, "XIOS is an operating system running within the browser, which executes the application logic locally (not on the servers)." Clearly for Arthursson the major point here is that all necessary code is executed locally, and this approach should help offset one of the biggest problems of web-based "OSes," that of performance. It's what makes XIOS stand apart. The question is, can XIOS succeed where so many others have failed?
There's a reason why new OSes aren't launched every day

Making a new operating system is always an exciting prospect, for everyone from university computer science students to large companies like Microsoft—the latter's Singularity research OS contains many interesting ideas. However, every truly new consumer operating system suffers from a serious and inevitably fatal problem: a lack of applications. And when applications finally come to the new OS, they do not compare favorably to mature apps from the more established platforms.

It has been argued that this latter failing is greatly overstated, and that new applications can easily be written that can accommodate the majority of people's computing needs. The standard line trotted out on these occasions is that 80 percent of users only use 20 percent of an application's features, so all a new app has to do is implement that 20 percent and they can easily grab 80 percent market share. The problem with this argument, as explained masterfully by technology blogger Joel Spolsky, is that everyone uses a different 20 percent. Writers, for example, always need a word count, and that's one of the things that is left out of every new "lite" word processing application because it doesn't fall into the standard 20 percent. Other users have different needs.

Xcerion claims that their small stable of applications can provide "40 to 50 percent functionality" of users' needs, but even this optimistic estimate would suffer from the problem outlined above. The company also states that any other software needs can easily be filled in by open-source development. The problem with relying on open source to fill in the gaps for a new OS is that OSS development is not spread out evenly amongst all projects. A considerable amount of work goes into improving Open Office, for example, but much less effort is put into Abiword, still less on KOffice's KWord, and a tiny fraction of the development resources find their way to word processing applications on new operating systems such as SkyOS or EyeOS. Can Xcerion expect a different outcome? They'll need a lot of buzz, early and often. Growth in the online "office apps" arena hasn't been explosive, and Xcerion will have to contend not only with Microsoft, but the likes of Google as well.
A solution in search of a problem

Our primary objection to the idea of an "Internet OS" with productivity apps is that it isn't really solving any particular problem in a superior way. People who want to use a word processing program today have a plethora of options. They can purchase a copy of Word, or download a free copy of OpenOffice, or, in a pinch, even use a web-based document tool such as Google's Writely. Having a new OS and a new word processor running in a web browser doesn't improve this situation, and could make it significantly worse because of potential speed and latency issues. Competition is a good thing, but it is still unclear what need this truly meets, aside from answering the most common objection to online apps: what happens if you're offline?

If XIOS is embraced by the developer community, there's no end to what could be written for it. While XIOS will debut with a productivity suite and perhaps another application or two, XIOS "the OS" should not be confused with the applications themselves, which in the early days may amount to little more than proofs-of-concept. XIOS is an "operating system" in search of a killer app. Arthursson suggests that the portability of a user's workplace--accessible from any browser, anywhere--will be that killer app.

What will bring the developers to the table? This is perhaps Xcerion's most genius move: XIOS was designed with monetization in mind, and developers will be able to make money off their applications through user fees or advertising. It's "Software-as-a-Service" meets the proverbial lemonade stand, and Xcerion hopes that developers will come for the lemonade and stay for the money.

Debian GNU/Linux 4.0 "Etch" Released

The Debian Project is pleased to announce the official release of Debian GNU/Linux version 4.0, codenamed etch, after 21 months of constant development. Debian GNU/Linux is a free operating system which supports a total of eleven processor architectures and includes the GNOME, KDE, and Xfce desktop environments. It also features cryptographic software and compatibility with the FHS v2.3 and software developed for version 3.1 of the LSB.

Using a now fully integrated installation process, Debian GNU/Linux 4.0 comes with out-of-the-box support for encrypted partitions. This release introduces a newly developed graphical frontend to the installation system supporting scripts using composed characters and complex languages; the installation system for Debian GNU/Linux has now been translated to 58 languages.

Also beginning with Debian GNU/Linux 4.0, the package management system has been improved regarding security and efficiency. Secure APT allows the verification of the integrity of packages downloaded from a mirror. Updated package indices won't be downloaded in their entirety, but instead patched with smaller files containing only differences from earlier versions.
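
Conceptually, Secure APT's integrity check comes down to comparing a downloaded package's checksum against the value published in the archive's signed index files. The Python sketch below shows just that mechanical comparison; the file name and expected hash are placeholders, and real APT also handles the GPG verification of the Release file, which this sketch leaves out.

    import hashlib

    def package_checksum_ok(deb_path, expected_sha256):
        # Hash the downloaded .deb in chunks and compare against the value
        # taken from the signed package index (placeholder inputs here).
        digest = hashlib.sha256()
        with open(deb_path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256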

Debian GNU/Linux runs on computers ranging from palmtops and handheld systems to supercomputers, and on nearly everything in between. A total of eleven architectures are supported including: Sun SPARC (sparc), HP Alpha (alpha), Motorola/IBM PowerPC (powerpc), Intel IA-32 (i386) and IA-64 (ia64), HP PA-RISC (hppa), MIPS (mips, mipsel), ARM (arm), IBM S/390 (s390) and – newly introduced with Debian GNU/Linux 4.0 – AMD64 and Intel EM64T (amd64).

Debian GNU/Linux can be installed from various installation media such as DVDs, CDs, USB sticks and floppies, or from the network. GNOME is the default desktop environment and is contained on the first CD. The K Desktop Environment (KDE) and the Xfce desktop can be installed through two new alternative CD images. Also newly available with Debian GNU/Linux 4.0 are multi-arch CDs and DVDs supporting installation of multiple architectures from a single disc.

Debian GNU/Linux can be downloaded right now via BitTorrent (the recommended way), jigdo or HTTP; see Debian GNU/Linux on CDs for further information. It will soon be available on DVD and CD-ROM from numerous vendors, too.

This release includes a number of updated software packages, such as the K Desktop Environment 3.5.5a (KDE), an updated version of the GNOME desktop environment 2.14, the Xfce 4.4 desktop environment, the GNUstep desktop 5.2, X.Org 7.1, OpenOffice.org 2.0.4a, GIMP 2.2.13, Iceweasel (an unbranded version of Mozilla Firefox 2.0.0.3), Icedove (an unbranded version of Mozilla Thunderbird 1.5), Iceape (an unbranded version of Mozilla Seamonkey 1.0.8), PostgreSQL 8.1.8, MySQL 5.0.32, GNU Compiler Collection 4.1.1, Linux kernel version 2.6.18, Apache 2.2.3, Samba 3.0.24, Python 2.4.4 and 2.5, Perl 5.8.8, PHP 4.4.4 and 5.2.0, Asterisk 1.2.13, and more than 18,000 other ready to use software packages.

Upgrades to Debian GNU/Linux 4.0 from the previous release, Debian GNU/Linux 3.1 codenamed sarge, are automatically handled by the aptitude package management tool for most configurations, and to a certain degree also by the apt-get package management tool. As always, Debian GNU/Linux systems can be upgraded quite painlessly, in place, without any forced downtime, but it is strongly recommended to read the release notes for possible issues. For detailed instructions about installing and upgrading Debian GNU/Linux, please see the release notes. Please note that the release notes will be further improved and translated to additional languages in the coming weeks.

About Debian
Debian GNU/Linux is a free operating system, developed by more than a thousand volunteers from all over the world who collaborate via the Internet. Debian's dedication to Free Software, its non-profit nature, and its open development model make it unique among GNU/Linux distributions.

The Debian project's key strengths are its volunteer base, its dedication to the Debian Social Contract, and its commitment to provide the best operating system possible. Debian 4.0 is another important step in that direction.