Tuesday, July 31, 2007

Cloning a MAC address in IPCOP

IPCop is a great SOHO Linux firewall distribution that is custom built using the Linux From Scratch (LFS) toolkit. I have been using it for many years, first toying with the 1.3.x versions and finally deploying the 1.4.x versions to my home network and in production enterprise environments. I have been very impressed that, given minimal hardware (200MHz PII, 128MB SDRAM, and 10GB HDD), I have been able to obtain darn close to wirespeed routing with NAT for 100Mbps connections (actual speeds hit 12MBps sustained). With a built-in DHCP server and support for up to four networks (public RED, private GREEN, DMZ ORANGE, and a separate BLUE LAN often used for wireless networks), there are really a lot of options available.

My biggest gripe has been their lack of support for MAC address cloning. As many of you on cable networks know, cable companies often practice poor man's security by adding MAC address restrictions to their DHCP servers. In order to get an IP address that is routable to the Internet, you need to present the MAC address with which you originally registered with the cable company. Many of us are ditching our Linksys boxes for a better-featured and more powerful firewalling solution, so our only options are to clone our MAC or to suffer through the tech support hotline music only to get a rep who has no idea what a MAC address is. Don't you hate it when you, the customer, have to tell the company's representative how to do their job?

Well, according to many of the blogs and posts out there, that's your only option with IPCop. I am here to say "Not any more." We just need to change the MAC address on the public interface BEFORE IPCop attempts to make a DHCP request. I once figured out a really clean way to do this after spending a few hours in the code base, but I have long since forgotten where I documented what I did. This time I had less time and had to find a faster way, even if it was less elegant.

First, log into the box (either on the console or through SSH) as root (you remember your root password, right?). Edit the following file (vi and nano are installed by default):

nano /etc/rc.d/rc.red

This is merely a Perl script that gets executed as the last part of the rc.netaddress.up process. Scroll past the first few lines. You will see a boilerplate header, some includes, some variable definitions, and finally, look for the section that says:

# read vars back from file

In my version (1.4.13), there are 4 lines after that line. Add a new line after all of those &General::readhash lines and insert the following:

system ('/sbin/ifconfig', 'eth2', 'down', 'hw', 'ether', '00:12:ef:34:2a:ee');

Replace eth2 with the name of your WAN (RED) interface and, of course, use the MAC address which you want the IPCop box to clone. Save the file, reboot, and enjoy your night free of long monotonous elevator music.
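A typo in that MAC address means RED comes back up without a routable lease, so it is worth sanity-checking the string before you reboot. Here is a minimal sketch of such a check (shown in Python purely for convenience; the helper name is my own and nothing here is part of IPCop):

```python
import re

# Six hex octets separated by colons, e.g. 00:12:ef:34:2a:ee
MAC_PATTERN = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def is_valid_mac(mac):
    """Return True if the string looks like a colon-separated MAC address."""
    return MAC_PATTERN.match(mac) is not None

print(is_valid_mac("00:12:ef:34:2a:ee"))  # True
print(is_valid_mac("00:12:ef:34:2a"))     # False: only five octets
```

Paste the exact string you plan to put in rc.red through a check like this and you will catch a missing octet or stray character before it costs you a reboot.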

Sunday, July 01, 2007

Designing applications for the enterprise and the home

While working for Purdue University as the Technical Lead for Application Development, one of the topics I stressed the most was designing applications to be Reusable, Extensible, and Scalable. While many people agreed that applications should be designed in a modular fashion, few agreed that even the smallest utility should employ such a design.
Why should I spend an extra hour writing a bunch of classes chock full of properties, methods, and constructors when a simple class with a bunch of "functions" would work just fine?
This is a very valid question. The underlying reason many people ask it is that they are thinking small. They don't realize that the functionality or effort they are putting into their utility could be used by someone else for another purpose. This all boils down to coupling: the term that expresses how dependent a given module is on another module.

For example, I have noticed that many PHP applications are written in a tightly coupled fashion. In a given method or function, the developer performs data validation and sanitation, manipulates inputs and performs calculations, and finally executes backend data transactions. That is highly coupled code. If you wanted to reuse the data cleansing and validation functionality, your only option would be to copy it into your own method. OK, that's no big deal. Now imagine that the backend database changes and the cleansing requirements change with it. Now the nightmares begin. In how many different places was this cleansing code implemented? Worse yet, how many different times was it rewritten? Developer B did not like Developer A's variable names, so he completely renamed all of the variables and changed some of the logic design. In a loosely coupled design, the data validation and cleansing functionality would be abstracted into its own set of classes under some Data Utility namespace. Now a requirement change is a simple edit in a single location.
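To sketch the loosely coupled side of that example (in Python rather than PHP, and with hypothetical names of my own), the cleansing rule lives in one utility class and every caller delegates to it, so a changed requirement is a one-line edit:

```python
class DataUtility:
    """Central home for validation/cleansing; callers depend only on this class."""

    @staticmethod
    def clean_username(raw):
        # The cleansing rule lives in exactly one place: trim whitespace,
        # lowercase, and strip anything that is not alphanumeric.
        return "".join(ch for ch in raw.strip().lower() if ch.isalnum())

def register_user(raw_name):
    # Loosely coupled caller: delegates cleansing instead of re-implementing it.
    return {"username": DataUtility.clean_username(raw_name)}

def rename_user(record, raw_name):
    # A second caller reuses the same rule; there is no copy-pasted logic to drift.
    record["username"] = DataUtility.clean_username(raw_name)
    return record

print(register_user("  Dev_A! "))  # {'username': 'deva'}
```

If the backend later demands a different cleansing rule, only clean_username changes; neither caller is touched.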

Now rethink the previous example. Suppose the developer had written the application in a loosely coupled fashion, but all of this code is stored on the developer's local hard drive. How does this help the situation? It doesn't. All code that is written should be stored in a central versioning repository. In addition, an entry should be made in a wiki or other central publishing store so that other developers can find the code easily by category, namespace, or search. Personally, I really like the MSDN style of documentation. In fact, there are many utilities that can automatically generate this documentation, so all you need to do is write up a small blurb about the class, method, etc., and some skeleton code showing how it should be used.

So what are the disadvantages of designing applications with loose coupling in mind? Well, one is performance. I will not try to disagree with the procedural-style developers and argue that loosely coupled code is faster. With proper design techniques and resource management, you can minimize performance issues. Besides, the cost of hardware these days is small, while the cost of labor is quite high. In addition, loosely coupled code allows you to be a lot more nimble, as design changes can be easily implemented.

Another difficulty encountered with loosely coupled code is deployment. Versioning of all levels of components is CRITICAL! A proper versioning pattern should be in place to ensure that every release (whether beta, alpha, production, release, internal, etc.) is properly versioned and that its features, bugs, known issues, and API specification are well known. In addition, with loosely coupled code, you want to make sure that your resulting libraries maintain API compatibility through the major version number. That means that if you have a typical versioning scheme of Major.Minor.Revision.Build, a DLL with version 1.2.5.6 should be binary compatible with 1.9.12.35. In other words, you should be able to reference any 1.x.x.x DLL in your application and it should work (minus any bug fixes) without changing a single line of code.
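That compatibility rule is easy to encode. Here is a sketch (in Python; the helper names are mine) that treats two Major.Minor.Revision.Build versions as binary compatible exactly when their major numbers agree:

```python
def parse_version(version):
    """Split a 'Major.Minor.Revision.Build' string into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_binary_compatible(installed, required):
    # Per the rule above: any 1.x.x.x library may stand in for any other
    # 1.x.x.x library, so only the major number has to match.
    return parse_version(installed)[0] == parse_version(required)[0]

print(is_binary_compatible("1.9.12.35", "1.2.5.6"))  # True
print(is_binary_compatible("2.0.0.0", "1.2.5.6"))    # False
```

A check like this in a deployment script lets you refuse to drop a 2.x.x.x library into an application built against 1.x.x.x.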

One additional critical element required for loosely coupled design is refactoring. Refactoring is the name given to the process of changing the design of code without changing its functionality or results. I love the word balance and try to apply it everywhere in my life, including when designing applications. Sometimes I will tolerate a certain degree of coupling because the costs of abstracting the code further outweigh the benefits. Since we live in a dynamic world, sometimes the benefits will start to outweigh the costs. As soon as the scale starts to tip, it's EXTREMELY important to refactor. Many companies don't encourage refactoring because it does not make them any money: you're not fixing bugs and not introducing new functionality. Despite this, the long-term cost advantage is hard to deny.

So whether you're a lone developer or a large enterprise, loosely coupled code, along with proper versioning and refactoring, can dramatically reduce development time and cut costs while increasing quality and delivering greater value.

Yes it is possible to have your cake and eat it too.