Ubuntu DVD Burners
|Nine DVD/CD burning tools for Ubuntu Linux...|
There is no dearth of good CD/DVD burning tools for Ubuntu. Brasero Disc Burner comes as the default in Ubuntu and it is a good enough tool, with almost all the functionality you expect from a basic CD/DVD burning application. But what are the alternatives? Here is a quick listing of very good CD/DVD burning applications available for Linux (in no particular order). read more...
|permapage | score:9197 | -Ray, April 2, 2011|
The Virtual Private Nightmare: VPN
|Maybe the 'P' really stands for Public...|
Here's a question: What's the number 1 vector for security outbreaks today? Given the title of the article we hope you answered Virtual Private Networks (VPNs). Today's convenient world of mobile access to critical applications and information has come with a hefty burden for the world's already overburdened security teams (and here are some nightmare prints). read more...
|mail this link | permapage | score:9188 | -Ray, August 4, 2004 (Updated: April 24, 2012)|
Build a budget Linux cluster
|How to build a cheap cluster using generic parts and open source software...|
DRBL on its own can create a set of machines that could be used as thin clients in, say, a classroom setting. By adding the Condor clustering software we turn this set of machines into a computing cluster that can perform high-throughput scientific computation on a large scale. You can submit serial or parallel computing jobs on the server, and Condor takes care of distributing the jobs to idle cluster machines, if any, or putting them in a queue until the required resources are available. Condor can also perform periodic checkpoints on the jobs and restart them if something causes the machine on which they are running to reboot. read more...
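To make the job-submission step concrete, here is a minimal sketch of a Condor submit description file; the executable and file names are invented, and a real pool may need more attributes:

# sketch of a Condor submit description file (names invented)
universe   = vanilla
executable = analyze
arguments  = input.$(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = cluster.log
queue 4

Running condor_submit on that file queues four copies of the job, and condor_q then shows them being matched to idle machines.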
|mail this link | permapage | score:9186 | -Ray, November 17, 2005|
Space Tyrant: A threaded game server project in C
|[Update, June 25, 2005: A Space Tyrant home page has been created as a central index to the various ST articles, links, and files.]|
[Update, March 21, 2007: A Space Tyrant has its own website! It's small but growing and will provide quick access to the latest code and developments in the ST universe.]
Space Tyrant: Today we kick off a new multithreaded, network socket programming project which we will call Space Tyrant. Our mission is to write an open source, multiplayer, networked strategy game in the C programming language. The goal of this project is to make a solid code base which implements a simple space trading game upon which other games can then be built. The game will be a subset of The Last Resort (TLR) that currently runs at [offline]. This project will be a learning exercise for me as well as for any interested readers. The current state of the source code will be released with each article update.
The game design: While my TLR game consists of over 25,000 lines of C source code and supports a web interface as well as telnet and a graphical client, this code will be far smaller and simpler. It will initially only support telnet and will implement a far simpler game design.
Players will be able to telnet into the game, create an account, and play in a universe that contains ports, planets, as well as other players. Each player will be issued a starship, some cargo holds, and an amount of starship fuel. Additional fuel will be issued hourly and will accumulate in the starship. Fuel will be used to move the ship between sectors -- locations within the game universe -- and to dock with ports. Once a ship runs out of fuel it can't move at all until new fuel is issued.
Players will be able to buy and sell commodities (Iron, alcohol, and hardware) between the three different kinds of ports. Each port type will sell one of the three commodities and buy the other two. Prices will be based on supply and demand with rarely-used ports offering the better prices.
With the money players earn trading they will be able to buy more cargo holds to make their ships more efficient for trading. They will also be able to buy fighters -- small military drones -- that can be used to attack other ships or deployed to guard a sector and its contents. The fighters carried with a ship will guard it against attacks from other players.
Games will run for a predetermined length of time, then reset and start anew.
The programming model: Now, on to the software design. I've compared and considered various models for the server design. TLR is based on the forking model using inetd or xinetd to handle the listening and forking. While the forking model is inherently distributable to multiple processors, it introduces inefficiencies (forking multiple processes) and makes interprocess communications more difficult and slower.
Next, I considered a non-blocking, single process model. In this approach, one process handles everything in a single thread. It would use non-blocking IO (read and write functions that never wait for completion but, rather, return immediately if they aren't ready to read or write actual data). The thttpd web server is an example of a non-blocking, single process server. It's extremely fast and efficient. However, this model is quite complicated to code and, I believe, would make it more likely to introduce subtle timing bugs.
Next, I considered a pure multithreaded, single process model with a thread for each player. While appealing in many ways, this model would require the same kind of coordination between threads that the forking model requires between processes. Such interprocess communication would be simplified in that the various threads share memory, but the coordination issues otherwise remain the same.
Last, I considered another multithreaded model, this time with only IO threads for each user and a single thread that implements all game logic. While that one central thread might someday be a bottleneck that limits scalability on large SMP systems, it does distribute the IO on any additional processors that might be present, and requires minimal coordination. In short, this model combines the logic simplicity of the non-blocking single process model with the coding simplicity of the threaded model, while separating the IO from the main logic. There will also be two other simple threads in this model. There will be a thread that listens for new connections and spawns the IO threads for each new connection. There will also be a thread that writes the data to disk periodically.
This is the approach that I intend to take for this project. The code will be written for both Linux and Mac OS X.
More info: I have set up an email address for programmers following this series to provide recommendations, bug reports, and other feedback. Send email about this project to spacetyrant [at] librenix.com.
|mail this link | permapage | score:9184 | -Ray, March 18, 2005 (Updated: July 26, 2008)|
Scripting: Put a clock in your bash terminal
|In the original version, the cursor positioning didn't work on my Mac OS X system. If that happens to you, try this simplified variant: |
#!/bin/bash
while true; do
    tput cup 0 60
    echo -en `date +"%H:%M:%S %F"`
    sleep 1
done

Also, note that you'll need to run either script in the background to use your terminal.
The original script saves the current cursor position with an ANSI escape sequence instruction. Then, using the tput command, the cursor is sent to row 0 (the top of the screen) and the last column minus 19 characters (19 is the length of HH:MM:SS YYYY-MM-DD). The formatted date command output is displayed in green inverted color. The cursor is then sent back to its original position with another ANSI sequence that restores the original saved position. read more...
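Pieced together from that description, the original version probably looked something like this sketch (an untested reconstruction; the escape codes are standard ANSI):

#!/bin/bash
while true; do
    echo -en "\033[s"                                    # save cursor position (ANSI)
    tput cup 0 $(( $(tput cols) - 19 ))                  # row 0, last column minus 19
    echo -en "\033[32;7m$(date +"%H:%M:%S %F")\033[0m"   # green, inverted video
    echo -en "\033[u"                                    # restore saved position
    sleep 1
done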
|mail this link | permapage | score:9178 | -Ray, January 22, 2008|
Microsoft to push unlicensed users to Linux
|Microsoft has long had a conflict of interest about software piracy. By pretending not to notice, they encouraged the use of unlicensed Microsoft software, thereby letting dependence on their formats, packages, and protocols grow. The time is approaching when that will change.|
Microsoft has historically made much noise and taken little action against unlicensed users of its software. In the case of some developing countries, the reason was obvious. Let them develop a US-style de facto Microsoft business standard and they then become owned by Microsoft.
We've all seen Windows users circulate simple text-only notes in Microsoft Word .doc files. While it may be annoying to those without .doc capabilities (including users of older versions of Word, itself), it is a beautiful thing from Microsoft's point of view. It perpetuates their monopoly while forcing upgrades among the faithful, all in the same simple act. The widespread use of proprietary formats tends to lead to even more use of those same formats.
However, as Microsoft's markets in the US approach the saturation point -- and start to recede -- they are faced with a dilemma. Do they try desperately to hold on to as much market share as possible, or do they cash in while accepting -- and accelerating -- the inevitable decline in share?
I think Microsoft will be increasingly choosing the 'cash in' option as the pressure rises to keep earnings high. The first victims of this gradual policy shift will be business and government users in developed countries with strong IP protection laws.
Next, in approximate order, come consumers in developed countries and business / government users in rapidly developing countries -- especially those countries seeking easy access to western markets. Last to pay up will be students and consumers in the poorest developing countries.
But, for all of you still getting a free ride from Microsoft, the good times will inevitably come to an end. They are simply waiting until you, and your compatriots, are too invested in the knowledge, skills, and standards of Microsoft products to quit. Then, they will charge you.
If you are an unlicensed Windows user who can't afford to someday become a profit center in the vast Microsoft empire, you should consider the alternatives. I recommend you start by downloading and burning a live Linux CD of Knoppix, booting it up on your Windows box, and trying it out. It's free and since it runs straight from the CD, you don't need to install it on your hard drive.
|mail this link | permapage | score:9176 | -Ray, August 1, 2005|
Scripting: A parallel Linux backup script
|This example bash shell script demonstrates a simple method of creating backups of multiple filesystems to multiple tape devices simultaneously. While the script presented writes to four tape drives in parallel, it can easily be modified to write to other device types and to create a different number of backup streams. The script is set up for the bash shell under Linux, but modifying it for another variety of Unix should simply be a matter of changing the locations of utility files such as tar, echo, cp, and sleep. |
The script can be downloaded from http://librenix.com/scripts/par.tar.sh. Download the file now and load it into an editor as this article will refer to it frequently. Also, you may want to modify bits of it to match your filesystem names and your devices.
The first line of the script looks like this:
#!/bin/bash

If the bash shell isn’t in the /bin directory on your system, you’ll need to modify this line. Enter the command which bash now to verify the location of bash. My Fedora Linux system and my Mac OS X system both have bash in /bin, but my FreeBSD system does not. If you have a non-Linux flavor of Unix, you’ll probably need to use the ‘which’ command to verify the locations of each command used in the script. The commands used are:

bash
cd
date
tar
echo
cp
sleep
wait

Note that ‘wait’ and ‘cd’ are usually implemented as internal shell commands and may not have external commands associated with them. If that is true for your system, leave ‘cd’ and ‘wait’ with no directory prefix just as they are in the original script.
Now, the first command in the script resets the current working directory to ‘/’:

cd /

Since the script prefixes each directory to be backed up with a ‘./’ to represent the current working directory, starting out at ‘/’ is necessary. The reason for this precaution is that some implementations of the tar command will only restore files from a tar archive into the exact directory that was specified when the files were backed up. By prefixing the names with a ‘./’ we preserve the ability to recover the files into any subdirectory we want, without overwriting the original files.
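Here is a tiny illustration of the difference the ‘./’ prefix makes; the device and directory names are only examples:

cd /
/bin/tar -cf /dev/st0 ./etc    # member names are stored as ./etc/...
mkdir /restore && cd /restore
/bin/tar -xf /dev/st0          # files land under /restore/etc; the originals stay untouched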
Immediately after the ‘cd /’ command is where you would put any commands to shut down all services that must be quieted prior to a backup. The example script has a (commented out) command to initiate an Oracle database shutdown followed by a ‘sleep’ command to allow time for the shutdown to complete. The example database shutdown and the following delay probably don’t apply to your system. Obviously, you’ll have to add commands yourself to stop any applications that might interfere with the backup.
Next, we use the ‘date’ command to create two sets of four tiny files to stick at the start and end of each tape. Note that the presence of a ‘date.#’ file at the beginning of each tape lets you quickly find out when a tape was created and on which drive. The ‘zzzz.#’ files, appended to the end of each tape, only serve to let you easily verify that a backup completed without overrunning the end of the tape.
Next, we start the four actual ‘tar’ backup commands, each with sample directories named ‘./dir1’, ‘./dir2’, etc. Of course, you’ll need to modify the list of directories to match the actual directories you wish to back up. Note that you’ll probably want to balance the directory sizes so that all of the largest directories aren’t on the same tape. Also, note that each ‘tar’ command is run in the background and logs to a tar.#.log file in the /tmp directory. Obviously, you might want to put the logfiles somewhere else.
After each ‘tar’ command there is an entry like this: ‘TASK0=$!’ or ‘TASK1=$!’. These arbitrarily-named ‘TASK’ variables are used to store the process ID of each backgrounded ‘tar’ command so that the script can wait for them with the four ‘wait’ commands that follow in the next block of code. There, we have the four ‘wait’ commands waiting on the $TASK0, etc., variables. (The addition of the ‘$’ to each TASK# shell variable is not a typo -- it’s necessary to read back the contents of the variable.)
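Stripped down to two streams, the pattern under discussion looks something like this sketch (device and directory names are placeholders):

/bin/tar -cf /dev/st0 ./dir1 ./dir2 > /tmp/tar.0.log 2>&1 &
TASK0=$!          # PID of the tar just launched in the background
/bin/tar -cf /dev/st1 ./dir3 ./dir4 > /tmp/tar.1.log 2>&1 &
TASK1=$!
wait $TASK0       # block until stream 0 finishes
wait $TASK1       # then stream 1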
Next, after the script has waited for the completion of each of the four ‘tar’ commands, it appends some information to a history file for later reference. It stores the date of the backup, the filesize of the logfile, and the number of files backed up on each tape to each of four history files. While the script will overwrite the logfiles (tar.#.log) each time it is run, it will append these three lines to each of the four history files (tar.#.history).
The final steps in the script are commented out. Those are the commands necessary to restart any applications that were brought down for the backup. Again, in the example we assume an Oracle database needs to be restarted. You’ll need to add the commands necessary to start any applications that were stopped at the beginning of the script.
|mail this link | permapage | score:9172 | -Ray, April 10, 2005|
Linux vs. Windows: Why Linux will win
|One of the oft-mentioned weaknesses of Linux, fragmentation, just happens to be one of its greatest strengths. A broad range of choices in an immature market is a good thing. Of course, choice does come at a cost. For example, there may be no standard way to do a particular task. Further, development resources will sometimes be split among two or more projects. However, these are weaknesses in the short term only. |
One could similarly argue that evolution of species suffers from the same 'weakness' of fragmentation. However, in the long term, the survival and consolidation of the best traits results in an improved breed. Eventually, one of the many approaches to some desktop task will rise to dominance and show the market the right way to do it, and, at the same time, reduce the fragmentation problem.
Based on my observations, business continuity considerations are starting to place more emphasis on portable data formats and protocols. Relational databases, contrasted with the counter examples provided by Microsoft formats, are helping to raise awareness of the value of portable data.
For a private business to blithely entrust their data to proprietary formats and protocols is irresponsible at best. For a public company to do so can be looked upon as a breach of the shareholders' trust; an unnecessary liability. It's quietly overlooked now partly because of the ubiquity of the practice and partly because no Microsoft-dependent organization wants to point out a liability from which they also suffer. This situation will change with growing awareness of the problem and as the Linux-plus-free-applications option makes vendor lock-in increasingly harder to justify. The time is coming when the stock market will recognize and reward data independence among public companies.
Linux is entrenched in the server world. That provides a huge opportunity to expand into more and larger server niches. It also provides a small contributing stream of desktop users in influential places.
Major market shifts, when limited by ingrained attitudes, are generational. It takes the replacement of one generation by the next for a market to complete such a transition. Even after Linux comes to dominate in new installations, there will naturally be Windows holdouts for many years, in both homes and organizations. This diehard tenacity is not a sign of strength, but it will be interpreted as one by a certain class of industry analysts for many years.
The IT industry has an inertia that is almost unimaginable to someone who hasn't spent significant time immersed in it. Application systems built on one operating system or architecture are extremely expensive to port to an unrelated OS or architecture. While this effect does slow the uptake of Linux in business, it also prevents a sudden loss of the Linux market share. But, mostly, it masks the rise of Linux so that it is possible for much of the IT industry to simply ignore its growth. I think that this effect of the slow and gradual adoption of Linux is the main support for the ‘Windows has won the desktop’ analysts’ arguments.
Linux is free as in ‘free beer’. Yes, you can buy it -- and many do -- but when you pay money for Linux you are really buying something else: support, non-free components, and convenience, to name a few. The reality, however, is that Linux is as free as you need it to be.
Linux is also open. It can be extended, embedded, and used as needed without restrictive licenses and without fear of vendor lock-in. This characteristic of Linux can only improve Linux’s profile with each business continuity study and proprietary counter example. The significant restrictions of Linux’s license, the GNU General Public License (GPL), establish rules of redistribution, not limitations of use.
Linux is also scalable -- but just what does that mean? Scalability runs in several directions. To say an OS is scalable doesn't simply mean that it scales to very large systems. Rather, scale refers to the entire range from the very small to the very large. It refers not just to the vertical dimension but also to the horizontal, across arrays of clustered systems. On this measure, Linux truly excels. Linux powers an amazing range of systems, from tiny devices to supercomputers. Of the several operating systems that scale to very large systems, Linux seems to be the one destined to own the small end of the size spectrum.
Security may be even more important than scalability to the IT industry. Security concerns are also gaining mind share among home users as identity theft becomes more widespread. SELinux is beginning to be integrated into major Linux distributions -- which will expand the number of security-conscious IT shops that can deploy it. At the same time, Windows has spawned a healthy industry dedicated to screening out viruses and worms.
I believe that Microsoft's practice of neglecting security is one of the biggest reasons for Firefox's phenomenal success, just as it is steadily contributing to Linux’s growth.
Meanwhile, there are a few features that many Linux distributions are still missing out of the box. As each of those areas is addressed, end-user Linux adoption will increase. As this process adds to the size of the Linux installed base, the newly enlarged base will increase the value of solving other such problems, continuing to fuel the positive feedback loop. As Linux reaches ‘critical mass’, almost all of the other arguments against Linux will fall, one by one. For example, when major vendors start offering preconfigured Linux systems to home desktop users, one of the most persistent complaints against Linux, that ‘it is hard to install’, will become irrelevant. As many readers surely realize, Windows is difficult to install as well. The difference is that users generally don't have to install Windows. It comes preinstalled, and with a preconfigured 'restore' CD. The implication of this is that as Linux approaches critical mass, its period of fastest growth may still lie ahead!
Meanwhile, Microsoft's desktop network effect advantage is weakening due to cross platform software packages such as Firefox, OpenOffice, and, for programmers, gcc.
Even some game makers could conceivably abandon Windows by releasing custom Linux LiveCD versions of their games. Granted, there might need to be some embedded graphics support, but this need not be an insurmountable problem since many games only support a limited number of graphics adapters anyway.
Linux has a certain ‘coolness factor’ that appeals disproportionately to young people. Further, Linux is strongest among the technological elite, i.e., those who help and advise others, run websites, write code, and generally set technology trends. This slice of the market is more important to the future than their numbers suggest.
Microsoft has, as they say in politics, ‘high negatives’. That is, a substantial percentage of people very much dislike Microsoft. These people will go to considerable efforts to avoid buying or using Microsoft products as alternative products become more visible.
Capitalism, like open source, is relentless and efficiency based. A central planner can never fully predict a market's evolution -- yet capitalism moves in lockstep with it. In much the same way, various Linux distributions will be born and die as desktop evolution relentlessly marches on. Even the current 'Linus' branch of the kernel can and will be replaced (forked) if it doesn't follow the main market closely enough. The ‘planned economy’ of Microsoft is at a disadvantage when facing the evolutionary dynamics of the laissez faire open source bazaar.
Compounding the problem for Microsoft, Linux is poised and ready to pounce upon any new, Windows-incompatible, hardware platform; perhaps IBM’s upcoming cell processor will be the next Linux success story. Linux runs on almost everything and gets quickly ported to new hardware. Linux is agile, Microsoft is not.
Microsoft's biggest remaining asset is probably the vendor lock-in ‘feature’ of Microsoft Office. Of course, that lock-in is also one of the biggest reasons not to use Microsoft Office. As free office suites achieve acceptable levels of command, feature, and file compatibility with MS Office, more and more users’ desktops will become available to Linux. Microsoft will, as always, try to leverage their current lock-in into future lock-in. But with the pace of office software development slowing as the market nears saturation, that is easier said than done. Changing Office to render a competitor incompatible will also hinder older versions of Office, creating more ill will. Also, if a competitor ever does achieve close compatibility with the current version of Office, customers will have the option of jumping to the competitor if Microsoft changes the file formats. With bad timing or a bit of bad luck, such a lock-in maneuver by Microsoft runs the risk of hastening the abandonment of Office.
Microsoft has always shrewdly leveraged their network effect and mind share advantage to maintain themselves and grow. They will continue to use this strength -- but they face many hazards. They must correctly identify the real threats early enough to fight and nullify them. Microsoft can win many battles and still lose the war. They simply can't win all the battles and yet their relentless adversary, Linux, can lose battles indefinitely and still come back to win the war. Unfortunately for Microsoft, ‘Linux’ doesn’t need to make a profit and can’t be put out of business by an upside down balance sheet.
Linux does, however, have one looming vulnerability. Microsoft could possibly kill Linux with some unwitting help from the Linux kernel team or the open source applications development community. Governments, through trademark, copyright, and patent law, wield such power over common business practices that runaway software patents -- like those now being issued in the US -- could kill off commercial Linux use and support in affected countries. For example, heavy participation in a scenario such as this one could lead to a near-death experience for Linux. This scenario, though, is best classified as a government action. Linux has already penetrated so many niches that the chances of Microsoft rooting it out via market mechanisms seem pretty slim.
And, no, Linux isn't yet ready for every desktop that Windows occupies. However, it wasn't long ago that Linux wasn't ready for many server roles either. The server situation has changed drastically just as the desktop situation is now changing. The desktop will change more slowly since it is not transparent to the user, but similar forces are pushing it inexorably forward. Each year new niches are added to the Linux desktop installed base and other, more established, niches grow. With each such increment of desktop growth, another marginal niche becomes viable. A few more years of this growth and the big market niches will gradually go from inaccessible to marginal to viable to dominated. No, Linux can’t yet replace Windows, but time is on Linux’s side.
Meanwhile, if you’re impatient, you can help to speed things up. Help a friend install Firefox or OpenOffice.org. Give a Windows user a Knoppix CD to play with or install a Desktop Linux distribution on their 'old' machine and show them a software repository full of nice, friendly, and free binary applications. If you’re a programmer, find an open source project that interests you and lend a hand.
|mail this link | permapage | score:9154 | -Ray, May 8, 2005 (Updated: May 13, 2005)|
SSH tips and tricks
|Cool tips and tricks for SSH, including X forwarding, (s)ftp, remote filesystem mounting, and an SSH SOCKS function...|
Most Linux users already know the bare basics of using OpenSSH. You use ssh to get a secure shell into a remote system, sftp for Secure FTP, and scp for copying files. All well and good. read more...
But OpenSSH can do quite a bit more than many users realize. Let's take a look at some of the things you can do with OpenSSH and associated tools.
|permapage | score:9146 | -Ray, January 20, 2011|
xmldiff: Create XML patch files
|If you support a lot of XML this tool may be for you...|
Xmldiff is a tool that can show you the differences between two XML files, taking into account changes that are purely syntactic or are not significant according to the XML specification. One of the patch formats that Xmldiff can generate is an XUpdate XML document that succinctly describes the changes between two XML files. read more...
In this article, I'll use xmldiff to generate an XUpdate patch, and the Perl module XML::XUpdate::LibXML to apply this patch to an XML file.
|mail this link | permapage | score:9142 | -Ray, June 5, 2008|
Free versions of Arial, Courier New, and Times New Roman fonts
|Red Hat releases free replacements for Windows core fonts...|
Available for immediate download, the Liberation fonts are intended to let users share documents between free operating systems and Windows without involuntarily reformatting the documents because the fonts don't match. The Liberation fonts are designed to be metrically equivalent to the Windows core fonts, with each letter occupying the same horizontal space as its equivalent in a proprietary font. read more...
Red Hat has a long history of interest in high-quality fonts that allow interoperability between operating systems.
|mail this link | permapage | score:9141 | -Ray, May 19, 2007 (Updated: May 20, 2007)|
Python Client/Server Tutorial
|A tiny Python tutorial...|
This application can easily be coded in Python with performance levels of thousands of transactions per second on a desktop PC. Simple sample programs for the server and client sides are listed below, with discussions following. read more...
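The sample programs themselves are behind the link, but you can smoke-test any such TCP server without writing a client at all; bash can open TCP connections directly (the port and request line here are invented):

exec 3<>/dev/tcp/localhost/9999   # connect file descriptor 3 to the server
echo "hello" >&3                  # send a request line
read -r reply <&3                 # read one line of response
echo "server said: $reply"
exec 3>&-                         # close the connection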
|permapage | score:9134 | -Ray, June 22, 2009|
Tutorial: Setup Linux iSCSI SAN
|Linux target framework (tgt) aims to simplify various SCSI target driver (iSCSI, Fibre Channel, SRP, etc) creation and maintenance. The key goals are the clean integration into the scsi-mid layer and implementing a great portion of tgt in user space.|
The developer of IET is also helping to develop the Linux SCSI target framework (stgt), which looks like it might lead to an iSCSI target implementation with an upstream kernel component. An iSCSI target can be useful:
a] To set up stateless servers / clients (used in diskless setups).
b] To share disks and tape drives with remote clients over a LAN, WAN, or the Internet.
c] To set up a SAN (storage array).
d] To set up a load-balanced web cluster using a cluster-aware Linux file system, etc.
In this tutorial you will learn how to have a fully functional Linux iSCSI SAN using the tgt framework. read more...
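As a taste of what the tutorial covers, tgt drives everything through the tgtadm userspace tool. A rough sketch follows; the target name, backing device, and IDs are made up, and the linked article is the authority on the exact steps:

tgtd                                                       # start the target daemon
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2008-11.com.example:storage.disk1  # create target id 1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 \
       --lun 1 --backing-store /dev/sdb1                   # expose a device as LUN 1
tgtadm --lld iscsi --op bind --mode target --tid 1 \
       --initiator-address ALL                             # allow any initiator to connect
tgtadm --lld iscsi --op show --mode target                 # verify the configuration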
|mail this link | permapage | score:9133 | -nixcraft, November 14, 2008|
Apple to Intel move no threat to Linux
|John C. Dvorak's recent Marketwatch commentary, 'Linux is likely the big loser', is completely off base. His fundamental mistake is to assume that 'the X86 platform' is more appealing than the freedom of Open Source and that the x86 processor is the important consideration for development: |
It's likely that developer interest will wane when Apple is fully engaged on the X86 platform. While Apple ran on the PowerPC chip the amount of developer effort in the Open Source camps was nil. But now that Apple is using the same processor as everyone else, targeting the Macs will now be an easy decision to make. This will be at the expense of Linux.

No, the Apple announcement doesn't mean that you'll be able to run OS X on a Dell. In the unlikely event that the Intel-based Macs are insufficiently different from PC's, Apple will build in additional hardware security features. Mac OS X will check for these features and will refuse to run in their absence.
Realistically, Apple will not make generic PC’s nor will the upcoming Intel version of Mac OS X run on non-Apple hardware. The new Apples will be just as proprietary as the PowerPC-based Apple hardware -- and just as distinct from the generic 'PC market'.
Above all, Apple is still a hardware company and a switch to commodity hardware -- or even making their new computers PC compatible -- would be a far more dangerous business risk than simply switching CPU architectures. Apple is not changing their business plan. They are changing their processor architecture and supplier only.
Try as they might, even Microsoft can’t stop Linux. And Apple isn't even trying.
The Apple switch to Intel processors is quite simply irrelevant to Linux.
Meanwhile, Apple's move makes sense from a market perspective. And it's not about clock speed or raw performance as some have suggested -- although those considerations are important. The obvious reason that rules out performance as the overriding consideration is simply that they didn’t choose the AMD Opteron for the Power Mac.
Rather, it appears that this strategic shift is (almost) all about laptops (and perhaps Mac Minis -- which technologically are just battery-less laptops in a new form factor). Laptops are now outselling desktops -- and that trend will increasingly drive hardware makers' profits. And at Apple, that trend may be even more important than in the general PC market. If this were about performance and price/performance at the high end, the partner would be AMD. This move is primarily about power-per-watt at the low end, hence, Intel.
Of course, we mustn't forget that the high-end systems will be migrated last, possibly more than two years from now. That gives Apple plenty of time to add a second partner if Intel's vast resources are unable to rein in AMD on the performance front. And, obviously, a move to AMD at that point would be a small technical task compared to the PowerPC-to-Intel switch.
With some luck and continuing success at the high-end, AMD could still get a major design win out of this transition.
|mail this link | permapage | score:9133 | -Ray, June 9, 2005 (Updated: August 1, 2005)|
MiniLesson: An introduction to Linux in ten commands
|This tutorial is the first in a series of introductory Linux lessons. This first article will cover navigating around a Linux filesystem along with a brief passage -- with examples -- on using ten of the most essential GNU/Linux commands. |
You should have access to a Linux system in order to perform the example commands as we progress through the tutorial. If you don't have a dedicated Linux box, you can use a Live Linux CD-ROM-based distribution such as Knoppix. Knoppix will let you run Linux directly from the CD without modifying anything on your hard drive.
Once you're logged in to a Linux system, open a terminal session. Each of the commands covered here will be typed directly into a command line terminal window. Under Red Hat Linux, terminal is found in the 'system tools' section of the menu. (Your system may, alternatively, have a terminal program called 'konsole', 'xterm', or 'shell'. Look around your system for a menu with 'tools' or 'utilities' in the name if necessary.)
The first command we will use is 'pwd' -- which stands for 'print working directory'. The pwd command shows you your current position within the Linux filesystem. The position is known as your 'current working directory'. Type pwd now. The example below shows my command prompt and the pwd command followed by the output from the pwd command:
[rayy@barton0 rayy]$ pwd
/home/rayy

From the output (/home/rayy) we can tell that I am in my 'home directory' -- the directory where I keep my personal files and the directory where I always start out in a new session.
The ls command lets you list files. For example, here is the (shortened) output of an ls command on my system:

[rayy@barton0 code]$ ls
artdir countdir machine

Alternatively, you can get a 'long listing' that shows file sizes, timestamps, ownership, and permissions as follows:

[rayy@barton0 code]$ ls -l
drwxr-xr-x 2 rayy rayy 4096 Feb 3 2002 artdir
drwxr-xr-x 2 rayy rayy 4096 Feb 3 2002 countdir
drwxr-xr-x 2 rayy rayy 4096 Feb 3 2002 machine
drwxr-xr-x 2 rayy rayy 4096 Feb 3 2002 sortdir
drwxr-xr-x 2 rayy rayy 4096 Feb 3 2002 tsardir

You can also supply a target directory to the ls command. For example, to view the contents of the /tmp directory, I enter the following:

[rayy@barton0 code]$ ls /tmp
flp kde-rayy mcop-rayy

For more information on the ls command you can reference the manual page for ls with the following command:

[rayy@barton0 code]$ man ls

cd
This next command, 'cd', lets you change your current working directory. For example, you can change your current working directory to /usr/bin by entering the following command:
[rayy@barton0 rayy]$ cd /usr/bin
[rayy@barton0 bin]$

Note that after I entered the cd command, my command prompt changed to reflect the change in the last node of my current working directory. Your command prompt may not be configured to do that.
Change your current working directory to /usr/bin now and enter the ls command.

[rayy@barton0 code]$ cd /usr/bin
[rayy@barton0 bin]$ ls
[ . . . ]

The preceding is a partial listing. There are many, many files in the /usr/bin directory on most Linux systems.
If you have a background in Windows or are familiar with DOS, you are used to file extensions that signify the file type. Linux (and Unix) have no such requirement. That is, an executable program can be named anything. Therefore, a handy command is supplied with Linux named 'file'. For example, I have a file named 'sample.c' in my code directory. I can learn a bit about that file by entering the following command:
[rayy@barton0 code]$ file sample.c
sample.c: C++ program text

Alternatively, I can use the '*' wildcard -- which represents all filenames -- to examine all of my code files at once. The following is a shortened example:

[rayy@barton0 code]$ file *
code.tar: GNU tar archive
genart.c: ASCII C program text
sample.c: C++ program text
xor: ELF 32-bit LSB executable

The file command can be very useful to avoid minor annoyances -- such as when using one of the following three commands.
The cat command is useful for concatenating multiple files -- or just for dumping a single text file to the screen. Before you use the cat command to dump a file to the screen, use the file command to make sure it's some variety of text file such as ascii text, commands/text, C source code, html/text, etc. The following is a shortened example of using file and cat to identify and dump a text file:
[rayy@barton0 code]$ file xor.c
xor.c: ASCII C program text
[rayy@barton0 code]$ cat xor.c
unsigned char buff,
[ . . . ]

more

The more command is useful when a text file is larger than a single screen. The following is a shortened example of using more to view a large C program:

[rayy@barton0 code]$ more xor.c
unsigned char buff,
[ . . . ]
--More--(29%)

Note the '--More--(29%)' at the end of the screen. That means that 29% of the file is above that line, implying that another 71% of the file is below. Press the space bar to page through the file, a screenful at a time. Press the b key to back up. If you finish looking before reaching the end of the file, press the q key to quit.
The grep command, short for 'global regular expression print', is useful for finding occurrences of a particular string in a text file. To find the 'printf' statements in the example C program above, enter the following command:
[rayy@barton0 code]$ grep printf xor.c
[ . . . ]

The grep command has far more capability than I describe here and, as usual, enter

[rayy@barton0 code]$ man grep

for more information.
The cp command will let you copy files. Unlike the commands used above, this one includes a hazard; if you copy filename1 to filename2 and filename2 already exists, you will destroy the original filename2 file. Use cp with caution!
To make a duplicate copy of my xor.c file I could enter the following command:
[rayy@barton0 code]$ cp xor.c xor.c.bak
[rayy@barton0 code]$ ls xor.c*
xor.c  xor.c.bak

Note that the cp command returned no output -- I had to enter an ls command to see the results of the copy. [By adding the * wildcard to the original filename, I asked for a listing of all files that started with xor.c -- including those with no additional characters in the name.]
The rm command is used for removing files. To remove the duplicate file I created in the cp command example, I would enter the following:
[rayy@barton0 code]$ rm xor.c.bak
[rayy@barton0 code]$ ls xor.c*
xor.c

Again, note the absence of any feedback from the rm command. I had to enter an ls command to verify that the xor.c.bak file had really been removed.
As with other commands, rm can remove multiple files at once when used with wildcards or with the -r (recursive) option. See the man page for more information on rm.
Ok, this is really two commands, but they are complementary. Use the mkdir command to make a new directory and use the rmdir command to remove an empty directory. For example:
[rayy@barton0 tmp]$ mkdir testdir
[rayy@barton0 tmp]$ ls
testdir
[rayy@barton0 tmp]$ rmdir testdir
[rayy@barton0 tmp]$ ls

In the preceding series of commands I first created a new directory named 'testdir'. I then used the ls command to verify its presence. Then, I removed 'testdir' and verified that it was gone by using ls again.
For more information on the commands covered in this article, take a look at the general commands man pages over at the LinuxQuestions.org website.
|mail this link | permapage | score:9133 | -Ray, February 19, 2004 (Updated: April 18, 2007)|
Linux Tutorial: Rename a RAID Array
|I am moving a raid array called /dev/md0 from serverA to serverB. On serverB /dev/md0 is already in use. How do I rename a RAID array from /dev/md0 to /dev/md2?|
You can move a RAID array (software based RAID array) to another system. However, if /dev/md0 is already in use on serverB, you can rename the incoming array from /dev/md0 to /dev/md2. read more...
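A hedged sketch of how that rename usually goes with mdadm; the member partitions (/dev/sdc1, /dev/sdd1) are invented, and the right --update flag depends on your superblock version:

mdadm --stop /dev/md0                      # on serverB, if the moved array was auto-assembled
mdadm --assemble /dev/md2 --update=super-minor /dev/sdc1 /dev/sdd1
# (for 1.x metadata, use --update=name instead of --update=super-minor)
mdadm --detail --scan >> /etc/mdadm.conf   # record the new array name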
|permapage | score:9132 | -nixcraft, November 27, 2012|
Missing the point of the Mac Mini
|I've read several articles and numerous comments over the past week detailing just how overpriced Apple's new Mac Mini is. Reviewers seem to conclude that, because they can assemble a PC of similar performance to the Mini for less money, the new Mac simply costs too much.|
What they have not done, however, is duplicate the Mac Mini in any important way. The closest comparison I've seen pitted the Mini against a machine 2.5 times its size. At least that reviewer understood that size matters. I'm a fan of small systems. I own 2 Biostar iDEQ cubes, one Shuttle, and three Book PC's. The Book PC's are the oldest and most obsolete, of course, with the fastest one containing a Pentium III 667. I've gotten rid of several systems over the past two years that were faster than the Book PC's. Why keep the slower computers while getting rid of systems up to twice as fast, you might ask.
I've kept the Book PC's because they are so small that keeping them around isn't a burden. They take about as much space on a bookshelf as an unabridged dictionary. I have one, currently disconnected, functioning only as a monitor stand for the system I'm using right now. A book PC will fit in my briefcase; I've hauled them around with me as if they were laptops. With a dozen computers around the house, space is precious and small is beautiful.
The Book PC's are 3.2" x 10.5" x 11.9". That's 4.8 times the volume of a Mac Mini. The Mac is truly tiny. I've worked to build fast, small, quiet Linux systems for years now. The iDEQ 200V is the cheapest system I've made that is fast, quiet, and runs Linux without complaint. Without software and with only the on-board graphics chip, it cost about the same amount as the Mac Mini. At 12.5" x 7" x 8", however, it is much larger than the Mini and weighs several times as much.
I challenge the anti-Mini crowd to build a PC of any shape that displaces approximately the same volume as the Mini plus power supply. Then, compare prices again. The SFF computer fans are clearly going to notice this machine and are going to buy a few truckloads of them. In the small form factor (SFF) computer market, even ignoring the software, this machine is clearly a bargain.
SFF computer fans who are committed to Windows will still covet this system; a few of them might even make the switch to OS X just to get one. I even expect some SFF Linux geeks to buy them because they're tiny, cheap, and can run Linux. Conclusion: the anti-Mini reviewers and posters are not SFF people.
Next, the Mini is an affordable and typically stylish Mac. A smallish PC does not run OS X. The Mini comes with OS X and will make a great second (or third) computer for many Mac users. I use Linux as my primary desktop OS (SuSE 9.2 Professional for the last three weeks, Fedora Core 2 the previous year) and FreeBSD and Linux (Fedora, Slackware) for my servers. I'm hardly a Mac guy but, as a Unix geek, I'm perfectly fine with OS X. I used a Mac as my primary desktop for a couple of weeks after a recent move.
Many Mac users -- at least those who need a second system -- will find the price -- and the size -- of this system quite appealing. Clearly, the negative reviewers and posters are not OS X users.
Therefore, I've come to the conclusion that these anti-Mac Mini arguments are coming from people who appreciate neither of the core characteristics of the machine. They don't understand the appeal of the SFF systems market, nor are they OS X / Mac users.
Apple, on the other hand, appreciates both and they have produced an impressively priced small form factor OS X system.
I wish for Apple responsive suppliers with scalable production facilities. They will surely need them in order to satisfy the demand for the Mac Mini.
|mail this link | permapage | score:9131 | -Ray, January 21, 2005|
Tutorial: Introduction to Linux files
|This newbie-level Linux tutorial is an introduction to handling files from the Linux command line. It will cover finding files, determining their type, renaming, copying, examining their attributes, reading their contents, and, in the case of binary files, how to get clues to learn something more about them. Further reading will be suggested for editing files since that topic is beyond the scope of this article. |
The reader of this tutorial is expected to have access to a Linux system and to perform the example commands as we progress through the tutorial. Once logged in to your Linux system, open a terminal session. Under Red Hat Linux, terminal is found in the 'system tools' section of the menu. (Your system may, alternatively, use a terminal program called 'konsole', 'xterm', or 'shell'. Look around your system for a menu with 'tools' or 'utilities' in the name if necessary.)
ls: Listing files
Let's start with the ls command. ls is an abbreviation for list files. Type ls now, then press the 'enter' key to see the names of the files in your current directory. The results from my 'tmp' directory are listed in bold below:
$ ls /tmp

Note that I said 'your current directory'. To get a listing of files in another directory, enter ls [dir] where [dir] is the name of the directory you wish to look at. For example, to see the file names in your top level directory, '/', type the following:
$ ls /
bin dev home mnt
proc sbin tmp var boot etc
initrd lib opt root sys usr

For more information on the files, use one or more of the ls command line switches. Here I use ls with the -l switch for a 'long' listing:
$ ls -l
-rw-r--r-- 1 root root 9649 Mar 28 02:47 tardir.0.log

Note that with the -l switch we get the file permissions, the number of links, the owner and group names, the file size in bytes, and the timestamp of the file, in addition to the name. The ls command has many more options. Type man ls for a full list of options.
file: What is this file?
Linux also provides a handy command to help determine what type of files you are dealing with:
$ file tardir.0.log
tardir.0.log: ASCII text

The Linux (and Unix) file command knows about, and can detect, many different file types. In our example, file tells us that tardir.0.log is a simple ASCII text file.
less: Paging through a file
Now, to actually look at the contents of a text file, we have many options. The most common is the more command; a more elaborate, newer command is less. I like less because it lets you use the arrow keys for scrolling and the pgup/pgdn keys for paging through the file. The following is a condensed page from the command less tardir.0.log:
$ less tardir.0.log
home/tfr/
[ . . . ]

From the ':' prompt we can page or scroll forward or backward. We can also type /star to search for the next occurrence of the string 'star'. Enter man more or man less for more information on the more or less commands, respectively.
mv: Renaming a file
Now, suppose we want to rename a file. Under Linux (and Unix) we 'move' it with the mv command as follows:
$ ls
tardir.0.log
$ mv tardir.0.log tar.log
$ ls
tar.log

Note that the mv command only produces output when there is an error. In this case, we encountered no error so mv quietly performed its work.
cp: Copying files
To make an actual copy of a file, we use the cp command. For example, to make a backup copy of tar.log named tar.log.2, we enter the following:
$ cp tar.log tar.log.2
$ ls
tar.log  tar.log.2

Again, we get no output to the screen when the cp command is used without error. We had to use the ls command to see the result of the command. Enter man cp for more details of the cp command.
strings: Looking for text in a binary file
Now, to actually look inside an unknown binary file for text strings there is a command called, appropriately enough, strings. For example, if we run the strings command on the 'echo' program, we get, in part, the following:
$ strings /bin/echo
[ . . . ]
Copyright (C) 2004 Free Software Foundation, Inc.
Written by %s, %s, %s,
%s, %s, %s, %s,
%s, %s, and others.

Type man strings for more information.
grep: Finding particular strings in a file
To look for a particular text string in a file, we use the grep command:
$ grep html tar.log

And, of course, man grep will retrieve additional instructions for the grep command.
find: Finding files by name
To find all files with a particular name on your system, use the find command. For example, to find files named 'echo', enter the following:
$ find / -name 'echo'

Further, to find all files in the /var filesystem with the string 'echo' in their names, enter this:

$ find /var -name '*echo*'
To get started editing text files try this tiny vi tutorial. After going through the quick tutorial, you can click the contents button and reach an advanced vi tutorial as well as other vi information.
For information on moving around in a Linux filesystem try this Introduction to Linux in ten commands. That article also provides additional examples on some of the commands covered here.
|mail this link | permapage | score:9125 | -Ray, April 2, 2005|
Bootable rescue tools
|Save your system with these bootable Linux rescue tools...|
As often as not these rescue disks will boot a version of Linux. For instance, the Kaspersky Labs rescue disk runs a version of Gentoo, Panda Security's SafeDisk is based on Debian GNU/Linux, and BitDefender and F-Secure are based on Knoppix; and these are not the only examples. read more...
|permapage | score:9114 | -Ray, February 17, 2011|
Linux Sysadmin interview: Format and tips
Recently I had to come up with a list of questions and a format for a sysadmin interview. From past experience and from talking to colleagues I found quite a few possible approaches and reasons, but there are still some common rules interviewers should keep in mind.
Use an email quiz to filter out the fakes and save heaps of time. We had a vacancy for a more senior role a while ago and received lots of good CVs. Barely 2% of the applicants we sent the quiz to came back to us with decent answers; some wouldn't even finish it! Something easy is enough: we asked them to describe a step-by-step backup restoration and to do some reformatting on a CSV file.
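As a concrete illustration (not our actual quiz), such a CSV task might be: given a users.csv with lines like 'jsmith,John Smith,2006-11-02,active', print 'Name <login>' for the active accounts, sorted by name. A one-line answer:

awk -F, '$4 == "active" { print $2 " <" $1 ">" }' users.csv | sort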
Never scare them! Some people have a hard time during interviews but perform very well on the job. Obviously you want to consider the emotional factor, especially for a sysadmin, who, more than others, might end up having to carry out critical tasks under pressure. But don't forget that the type of stress is different, and responding badly to one doesn't mean responding badly to the other, and vice-versa.
Avoid questions that can be answered with a RTFM! Otherwise it's cheaper to buy a parrot and read man pages to it before bedtime. As usual there are limits; some commands and options should be known. But try to probe whether they can use their brain rather than just remember things. Possibly come up with problem-solving oriented questions: put together a bunch of different log files and ask them to reconstruct what happened, or ask them to explain to you how they would deploy software to a large number of machines.
Leave questions as open as possible; you're more interested in understanding their approach and attitude than in the actual answers (again, this is true to some extent, catch my drift). Invite them to think aloud; that generally works.
Find out if they are passionate about the subject. Ask them about their distribution and why they use it (this is also a good way to spot zealots; avoid them like the plague!). Ask them why they want to work as a sysadmin and if they have particular reasons to apply for a position within your company.
If your company is mostly based on FLOSS, find out their knowledge and interest in free libre software, that is also a good indicator of passion.
Down to the questions (bear in mind the target was a junior admin):
- Test their knowledge of the "community": who's Linus Torvalds; Richard Stallman; Alan Cox; Eric Raymond; and maybe another few big names like Tanenbaum and Theo de Raadt.
- Do you run Linux at home?
- What distribution, why?
- What project are you most proud of?
- Why are you leaving your current job?
- Why do you want this job?
- Why should we hire you?
- What is a sysadmin?
This is highly optional, but I'm possibly looking for a geek, and any person applying for a sysadmin position should be able to answer at least the first 2 questions.
- Can you expand RTFM? BOFH?
- What's the last man page you've read?
- Do you read any webcomics?
Generic linux questions
- What are setuid/setgid in relation to file permissions? (see the sketch after this list)
- What are setuid/setgid in relation to directory permissions?
- What is an inode?
- What does init do? What does inetd do?
- What’s PGP/GPG; Public/Private Key cryptographic systems
- What’s ssh? Setting up trust between accounts.
- How does ssl work?
- What are different directories in / for?
- What do you do if a newly built kernel does not boot?
- Do you know what source management is? Have you used it? What software?
- How does the boot process (init levels) work on Linux?
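For the setuid/setgid questions, an answer sketch in commands (the program and directory names here are hypothetical):

chmod u+s /usr/local/bin/myprog   # setuid: the program runs with its owner's uid
chmod g+s /srv/shared             # setgid on a directory: new files inherit its group
ls -l /usr/bin/passwd             # -rwsr-xr-x: the 's' marks a real setuid binary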
Things we need
Depending on your business there might be some piece of software you use heavily, and hence a particular interest in the candidate's knowledge about it: "Have you used Apache? Talk to me about it." In my case we primarily do web based services, so I ask about Apache and MySQL.
- What do you know about configuring and/or compiling Apache;
- Virtual Hosts?
- .htaccess files?
- mod_perl? mod_php?
- log files and log management
- Are you familiar with SQL?
- Do you know about indexes?
- Can you name and explain some of the tables in the mysql database?
Cli

We send an email quiz before the interview which includes a scripting test. You might want to set one up for the interview; generally, parsing CSV files is a good one.
- What shell do you use?
- do you know any other?
- Name some basic shell commands, like cut, and explain what they do
- Which editor? vim/emacs/something else
File systems and disks
- What is the big difference between ext2 and ext3?
- What about xfs?
- Partitions layout
- Different raid levels
Networking

- What’s a hub?
- Problems with hubs
- What’s a switch?
- What’s a router?
- What’s the difference between UDP and TCP?
- How would you find what ports are open on a machine (local and remote)? (see the example after this list)
- What’s the OSI model? What are the seven levels?
- How would you capture network traffic?
- What’s a VLAN?
- How does DNS work?
- How does FTP work?
- How does SMTP work?
- What's traceroute?
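One answer sketch for the open-ports question (the remote hostname is made up):

netstat -tlnp                 # local: listening TCP sockets and their owning processes
nmap -sT remote.example.com   # remote: TCP connect scan of common ports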
Security

- Do you know what a chroot is?
- Do you know what a BoF is?
- Do you know what an sql injection is?
- Do you know what a DoS attack is?
- Do you know what a botnet is?
- What firewall applications have you used?
- Can you name the problems of firewalling ftp?
|mail this link | permapage | score:9111 | -Filippo Spike Morelli, May 21, 2007|