I have covered a wide range of skills in Linux system administration and web development using open source software since 1999. I also hold a BTEC HNC in Commercial Data Processing, gained in 1990. Below is a list of some of the topics I have covered in that time.
I have been designing, building and upgrading my own web servers in my spare time since late 2000. This site used to live on a server I built, administered and maintained myself, running first on SuSE Linux and then on Fedora Core. I have now moved all of my websites to cloud hosting, which enables me to take my own server offline without the live websites going down.
The server I built and run at home is now used for development and testing purposes, and is currently running CentOS 5.9.
I first started to use Linux in October 1998, with a SuSE 5.2 cover CD-ROM from PC Plus magazine. SuSE Linux 5.2 review:
"UK and European Linux fans have a reason to rejoice this month, with the release on the PC Plus Magazine's October 1998 cover CD-ROM of a complete distribution of SuSE Linux 5.2, complete with GIMP 1.0, KDE 1.0 and much much more on a bootable CD-ROM."
I am confident installing Apache from source code, which includes configuring the source and compiling in the required modules. I administer Apache on my development and testing server at home, running CentOS 5.9.
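A from-source build goes roughly like this (a hedged sketch - the version number, install prefix and module flags below are examples only, not my exact configuration):

```shell
tar xzf httpd-2.2.34.tar.gz
cd httpd-2.2.34
# --enable-so      : support loadable (DSO) modules
# --enable-ssl     : compile in mod_ssl
# --enable-rewrite : compile in mod_rewrite
./configure --prefix=/usr/local/apache2 --enable-so --enable-ssl --enable-rewrite
make
sudo make install
```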
This covers quite a wide area, as follows:
- Installation and configuration of the database server, and client programs that communicate with the server.
- Creating users and administering their access permissions to the database.
- Database design using table normalisation techniques.
- Writing optimised code that interacts with the database - adding, updating, removing and retrieving records - for use in database-driven web applications.
- Checking database tables for integrity, and safely backing up the database.
- Monitoring the performance of the database server, and implementing any optimisation techniques necessary, via configuration files.
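For example, creating a user and granting permissions is done with SQL statements such as these (the database and user names are placeholders of my own, not a real setup):

```sql
-- Hypothetical names throughout.
CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE, DELETE ON shopdb.* TO 'webapp'@'localhost';
FLUSH PRIVILEGES;

-- Check a table for integrity:
CHECK TABLE shopdb.orders;
```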
Unlike procedural programming, which uses function calls to separate reusable code, OOP uses discrete 'objects' that contain both the data and the methods to process that data. The data inside each object is only accessed externally via the object's interface.
An object's interface is the collection of methods that can be called from outside the object. It is bad practice to manipulate an object's data directly, bypassing the interface. This is one of the main principles of OOP - code to an object's interface, not to the implementation behind it.
That way, if the implementation of the code inside the object is updated, the object still provides a consistent interface to the outside world - so external code that uses the object does not have to be changed.
This is how polymorphism is implemented: by creating objects with the same interface but different internal functionality, you can modify running code dynamically, switching the active object for a particular task such as reading or writing various file formats.
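The idea can be sketched even in shell, using interchangeable functions as a loose stand-in for objects (an analogy only - shell has no real objects; the function names are my own invention):

```shell
#!/bin/bash
# Loose analogy: two "implementations" sharing one calling convention.
write_csv() { printf '%s,%s\n'  "$1" "$2"; }
write_tsv() { printf '%s\t%s\n' "$1" "$2"; }

writer=write_csv      # select the active implementation
"$writer" name value  # prints: name,value

writer=write_tsv      # switch implementations while the script runs
"$writer" name value  # prints the same fields, tab-separated
```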
I found this book very helpful in explaining the terms used in Object Oriented Programming: OOP Demystified by James Keogh & Mario Giannini.
LyX is a program used to produce high-quality DVI and PDF documents.
"LyX is a document processor that encourages an approach to writing based on the structure of your documents (WYSIWYM) and not simply their appearance (WYSIWYG). LyX combines the power and flexibility of TeX/LaTeX with the ease of use of a graphical interface."
On screen, LyX looks like any word processor; its printed output - or richly cross-referenced PDF, just as readily produced - looks like nothing else.
The shell is a command interpreter. More than just the insulating layer between the operating system kernel and the user, it's also a fairly powerful programming language. A shell program, called a script, is an easy-to-use tool for building applications by "gluing together" system calls, tools, utilities, and compiled binaries. Virtually the entire repertoire of UNIX commands, utilities, and tools is available for invocation by a shell script.
If that were not enough, internal shell commands, such as testing and loop constructs, lend additional power and flexibility to scripts. Shell scripts are especially well suited for administrative system tasks and other routine repetitive tasks not requiring the bells and whistles of a full-blown tightly structured programming language.
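As an illustration, here is a small hypothetical script in that spirit - a shell test, a loop, and a standard utility (gzip) glued together to compress a directory's log files (the function name and paths are my own invention):

```shell
#!/bin/bash
# Hypothetical example: compress every .log file in a directory and
# report how many were processed.
compress_logs() {
    local dir=$1 count=0
    for f in "$dir"/*.log; do
        [ -f "$f" ] || continue          # skip when no .log files match
        gzip "$f" && count=$((count + 1))
    done
    echo "$count log file(s) compressed"
}
```

It could then be called as, say, `compress_logs /var/log/myapp` (a placeholder path).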
UML is a standardized, general-purpose modeling language in the field of software engineering. The UML includes a set of graphic notation techniques to create visual models of object-oriented software-intensive systems.
UML combines techniques from data modeling, business modeling, object modeling, and component modeling. It can be used with all processes throughout the software development life cycle, and across different implementation technologies. UML offers a standard way to visualize a system's architectural blueprints.
The ICONIX Process is a minimalist, streamlined approach that focuses on the area between use cases and code. Its emphasis is on what needs to happen at the point in the life cycle where you're starting out: you have a start on some use cases, and now you need to do good analysis and design.
"PHing Is Not GNU make; it's a PHP project build system or build tool based on Apache Ant. You can do anything with it that you could do with a traditional build system like GNU make, and its use of simple XML build files and extensible PHP "task" classes make it an easy-to-use and highly flexible build framework."
PHING is designed for automated builds of medium to large-scale PHP software projects. For example, a team of developers might work on their assigned PHP source code modules for a project, checking their modifications in regularly to a version control system. PHING could be set to run automatically every hour and perform a complete build of the whole system.
I use PHING as the build tool for my PHP5 modular website framework. You can see some PHING scripts here: Example PHING XML scripts
You can read more about the PHING build system here: PHING homepage
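A minimal PHING build file follows Ant's XML conventions; this is a hedged sketch with illustrative project, target and directory names, not one of my actual build files:

```xml
<?xml version="1.0"?>
<!-- Hypothetical minimal PHING build file. -->
<project name="mysite" default="build">
    <target name="build" description="Copy PHP source files to the build area">
        <copy todir="./build">
            <fileset dir="./src">
                <include name="**/*.php"/>
            </fileset>
        </copy>
    </target>
</project>
```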
phpDocumentor 2 is a tool that generates documentation from your PHP source code, and works with either procedural or OOP code. With this documentation you can provide information about the functionality embedded within your source code, beyond what is visible from the user interface. Documentation generated by phpDocumentor 2 does not aim to replace conventional user-guide documentation, but rather to supplement it as reference documentation.
PHP is mainly focused on server-side scripting, and can be embedded into web pages using the Apache PHP module (or FastCGI on closed-source web servers). You can do anything any other CGI program can do, such as collect form data, generate dynamic page content, or send and receive cookies. But PHP can do a lot more than that too.
PHP scripts can also be run from the command line using the stand-alone PHP executable. With PHP there is not much that you cannot do on the server - if the OS supports it, then PHP should be able to do it as well. I like to use PHP command-line scripts that call the bash command interpreter; that way I have all the ease and flexibility of programming in PHP, plus the ability to use any command that can be run from the bash shell. See the PHP backup scripts I posted on the Fedora forums for an example: Generic PHP CL backup script
PHP reportedly runs on more than 200 million websites.
"SQLite implements a self-contained, serverless, zero-configuration, transactional embedded SQL database engine. The code for SQLite is in the public domain and is thus free for use for any purpose, commercial or private. SQLite is currently found in more applications than we can count, including several high-profile projects."
Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained in a single disk file. The database file format is cross-platform - you can freely copy a database between 32-bit and 64-bit systems, or between big-endian and little-endian architectures. SQLite table columns are also dynamically typed - i.e. you can store values of different data types in the same column.
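The dynamic typing is easy to demonstrate with the sqlite3 command-line shell (assuming it is installed; the table here is a throwaway example in an in-memory database):

```shell
# One column, three different data types stored in it.
sqlite3 :memory: "
  CREATE TABLE t (v);
  INSERT INTO t VALUES (1);
  INSERT INTO t VALUES ('two');
  INSERT INTO t VALUES (3.5);
  SELECT v, typeof(v) FROM t;
"
# prints:
# 1|integer
# two|text
# 3.5|real
```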
This is the basic language that web pages are built from.
XHTML is HTML reformulated as XML.
CSS rules allow a website designer to set the visual style of page elements (the way an element 'looks') and their position on screen, across all pages of a website.
For each part of a web page, values such as background color, background image, border size, border style, border color, font style, font size, margin dimensions, padding size, positioning, tables and their elements, and text color, can all be set by using CSS rules in one style sheet. This makes altering the visual style or position of page elements really easy.
Any changes to a style sheet rule will be reflected for all elements that use that rule, across all pages of the website.
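For instance, a single rule like this hypothetical one controls every element carrying the class, on every page that links the style sheet (the class name and values are illustrative):

```css
/* One rule, applied site-wide to every element using class="warning". */
.warning {
    color: #c00;
    background-color: #fff0f0;
    border: 1px solid #c00;
    padding: 0.5em;
    margin: 1em 0;
}
```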
You can also change the visual style properties of a web page's component parts in response to a user's interaction with the browser. This is often referred to as Dynamic HTML.
When a browser loads a document, the browser builds a representation of the document's structure in memory, called the DOM (Document Object Model). The browser then uses the DOM in computer memory to display the document in the browser window.
Any part of a page can be altered or removed, hidden from view, or made visible again.
The presentation style of any page element can be modified dynamically. For example, changing the border color of a form element into which a user has entered invalid data.
Packet filtering (aka firewalling) is most commonly used as a first line of defense against attacks from machines outside your LAN. Packet filtering allows you to explicitly restrict or allow packets by machine, port, or machine and port. For instance, you can restrict all packets destined for port 80 (WWW) on all machines on your LAN, except for the machine that is designated as your web server.
Most modern routing devices have built-in filtering capabilities, and packet filtering has become a common method of security for machines connected to the internet. Although packet filtering is very flexible and powerful, by no means does it guarantee the security of your LAN, but it does go a long way toward protecting it.
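As a hedged sketch, the web server example above might look like this with iptables on a Linux router (the address is a placeholder; the rules need root and must be adapted to your own chains and default policy):

```shell
# Allow web traffic (port 80) only to the designated web server,
# 192.168.1.10 (placeholder address), and drop it for all other hosts.
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT
iptables -A FORWARD -p tcp --dport 80 -j DROP
```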
When the browser receives the requested data from the server, it can then use DHTML to update the current page with the new data. This is usually a lot quicker than reloading a complete new page from the server.
TMDA is an open source software application designed to significantly reduce the amount of spam (Internet junk-mail) you receive.
The technical countermeasures used by TMDA to thwart spam include:
- whitelists: - accept mail from known, trusted senders.
- blacklists: - refuse mail from undesired senders.
- challenge/response: - gives unknown senders who aren't on the whitelist or blacklist the chance to confirm that their message is legitimate (non-spam).
- tagged addresses: - special-purpose e-mail addresses such as time-dependent addresses, or addresses which only accept certain kinds of communication. These increase the transparency of TMDA for unknown senders by allowing them to safely circumvent the challenge/response system.
This combination was chosen based on the following assumptions about the current state of spam on the Internet:
- You cannot keep your email address secret from spammers.
- Content-based filters can't distinguish spam from legitimate mail with sufficient accuracy.
- Spam will not cease until it becomes prohibitively expensive for spammers to operate.
To maintain economies of scale, bulk-mailing is generally:
- An impersonal process where the recipient is not distinguished.
- A one-way communication channel (from spammer to victim).
The RPM Package Manager (RPM) is a powerful command line driven package management system capable of installing, uninstalling, verifying, querying, and updating computer software packages. Each software package consists of an archive of files along with information about the package like its version, a description, and the like.
RPM is a core component of many Linux distributions, such as Red Hat Enterprise Linux, the Fedora Project, SUSE Linux Enterprise, openSUSE, CentOS, MeeGo, Mageia and many others. It is also used on many other operating systems, and the RPM format is part of the Linux Standard Base.
I have rebuilt a nice process manager, called qps, from a Fedora 6 SRPM to run on CentOS 5.6. The source code files are here:
Before you can rebuild or create your own RPM packages under CentOS, you need to set up a dedicated user for this purpose, with their own build environment; this ensures a bad build cannot trash the Linux system itself. This article on the CentOS wiki covers this well:
Here is a comprehensive guide on using RPM for Linux package management, and how to build and create your own packages using RPM:
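Once the build environment is set up, typical RPM commands look like this (the package file name is a placeholder of my own; run rpmbuild as the dedicated build user, never as root):

```shell
rpm -qi qps                               # show information about an installed package
rpm -qlp qps-1.9-1.fc6.i386.rpm           # list the files inside a package file
rpmbuild --rebuild qps-1.9-1.fc6.src.rpm  # rebuild a binary RPM from a source RPM
# The finished package is written under the build user's RPMS/<arch>/ directory.
```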