Friday, December 21, 2007

Two arms, two legs, and 140 bpm

Today Shana and I went to have an ultrasound done. The baby we're expecting is healthy and doing well. According to the measurements, we're probably a little ahead in our estimates for the due date. The baby might be a week older than we thought. Nevertheless, we're both extremely happy, thankful and blessed to have another healthy baby on the way.

I was pleasantly surprised when we got the DVD in our hands and it was an actual movie. And now, here it is for the world to see.

Oh, and what about the sex? You'll have to watch and see for yourself. And if you like, leave some naming suggestions in the comments. We're open-minded, but yes, picky too :-)

Wednesday, November 14, 2007

Leopard for the Web Developer - Multiple Virtualhosts with SSL in Apache

The goal is simple but lofty -- configure Apache for multiple SSL virtualhosts in Mac OS X Leopard.

In practice, this gets a little complicated. Here are the basic set of steps to take:

  1. Configure domain name resolution for the development host names
  2. Configure distinct IP address aliases for those host names (critical for multiple SSL Virtualhosts)
  3. Create the self-signed SSL certificate(s)
  4. Enable the correct modules and configuration files for Apache.
However, before we begin, let's do a little bit of preparation. The first thing you'll want to do is determine which host names you're going to use. For example, let's say you have two development domains -- local.lo-fi.net and dev.lo-fi.net -- each of these will be a different development project, pointing to different resources on the file system. Once you know which host names you'll be configuring your environment for, you need to choose the distinct IP addresses to give them.

For the record, distinct IP addresses aren't important if you don't plan on using SSL. However, if you do need SSL on these domains, a distinct IP address per host is critical. The short version is that the SSL handshake happens before Apache ever sees the HTTP Host header, so the server can only present one certificate per IP address and port. If you're a little lost, don't worry, more details about the SSL stuff follow. The point is, you need to decide which IP addresses you want to use for the host names.

For my purposes, I want my host names to work much like "localhost" works. I just want a host name to point to my local computer. Basically, here's what I want:

host: localhost = ip: 127.0.0.1
host: local.lo-fi.net = ip: 127.0.0.2
host: dev.lo-fi.net = ip: 127.0.0.3

Now that you've made this decision, you can move forward.

Configuring Domain Name Resolution for the Development Host Names

This is the easiest part, by far. Simply open /etc/hosts with your favorite text editor (you'll need sudo, since the file is owned by root) and add a few lines. Here's what mine looks like:


##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost

#Development domains
127.0.0.2 local.lo-fi.net
127.0.0.3 dev.lo-fi.net

All of the "localhost" lines should already exist for you. The 2 lines that are relevant to you are the ones showing the desired ip addresses with the development domains. After you save this, if you try pinging "local.lo-fi.net", your computer will try to connect to the ip address "127.0.0.2" However, that IP address doesn't exist yet. That's next.

Configuring IP Aliases with ifconfig and launchd

Since I want to have my host names act like localhost does, I'm going to add aliases to the loopback network interface. You can do this in the terminal by typing:

sudo ifconfig lo0 alias 127.0.0.2 netmask 255.255.255.0
sudo ifconfig lo0 alias 127.0.0.3 netmask 255.255.255.0
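To confirm the aliases took, you can list the loopback interface and ping one of the development names -- just an optional sanity check:

ifconfig lo0
ping -c 1 local.lo-fi.net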
Now, when you ping "local.lo-fi.net", your computer tries to contact 127.0.0.2, and you get a result. Great! This is exactly what we want -- well, kind of. When you reboot, this configuration is lost. What we really want is for this to happen at startup, so we don't have to re-configure ifconfig every time we start the computer.

Making this happen is one of the coolest things in this process -- at least, that's what I think. Before I get to the solution I settled on, I'll back up just a bit. In previous versions of Mac OS X, you could edit the /etc/iftab file, which consisted of lines with the arguments you'd pass to ifconfig (e.g. lo0 alias 127.0.0.2 netmask 255.255.255.0), and these would get picked up at startup. However, in Leopard iftab is gone. What's a developer to do? One solution I found leveraged Automator, but I figured out something much more elegant.

Launchd to the rescue!

launchd is the daemon in OS X responsible for much of the system's process management. It's designed to be a powerful replacement for other service-management tools like inetd, rc, and even cron. What's neat is that you can create a plist configuration file pointing to the executable you want to run, put it in a specific place, and the system executes it at the right time. What we're going to do is create a plist file for each of our network aliases. These plist files will execute ifconfig with the arguments we need, and do it as the privileged root user without any manual intervention.

We're going to create 2 plist files:

/Library/LaunchDaemons/net.lo-fi.local.ifconfig.plist
/Library/LaunchDaemons/net.lo-fi.dev.ifconfig.plist
OK, here's the content of one of these files (net.lo-fi.local.ifconfig.plist). The dev.lo-fi.net version is identical except for the Label and the IP address (127.0.0.3).

<plist version="1.0">
<dict>
<key>Label</key>
<string>net.lo-fi.local.ifconfig</string>
<key>ProgramArguments</key>
<array>
<string>/sbin/ifconfig</string>
<string>lo0</string>
<string>alias</string>
<string>127.0.0.2</string>
<string>netmask</string>
<string>255.255.255.0</string>
</array>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>

I won't go into the nitty-gritty of launchd plist files, but basically this file does three things:
  1. Declares a unique Label name (net.lo-fi.local.ifconfig)
  2. Declares the program to run (ifconfig), as well as the arguments that follow
  3. Tells launchd to run this at load time
Finally, these files need to be owned by root (launchd won't load daemons that are writable by anyone else). So, if you haven't already, do this:
sudo chown root:wheel /Library/LaunchDaemons/net.lo-fi.local.ifconfig.plist
sudo chown root:wheel /Library/LaunchDaemons/net.lo-fi.dev.ifconfig.plist
Now, when you reboot, launchd will run ifconfig with the arguments you need. It's so simple, it's beautiful.
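If you'd rather not wait for a reboot, you should also be able to load the jobs immediately with launchctl. It also doesn't hurt to make sure the files aren't group- or world-writable, since launchd is picky about permissions:

sudo chmod 644 /Library/LaunchDaemons/net.lo-fi.local.ifconfig.plist
sudo chmod 644 /Library/LaunchDaemons/net.lo-fi.dev.ifconfig.plist
sudo launchctl load /Library/LaunchDaemons/net.lo-fi.local.ifconfig.plist
sudo launchctl load /Library/LaunchDaemons/net.lo-fi.dev.ifconfig.plist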

Self-Signed SSL Certificates

I have to admit, I followed the instructions for creating a self-signed certificate here, with splendid results. If you're interested in the details of how creating a self-signed certificate works, it's worth your time to read. If you don't care and just want something that works, you probably won't need more than what I show you below. Either way, here's the high-level overview of what I'm about to do:
  1. Create a certificate for our own personal signing authority
  2. Create a certificate request for a domain
  3. Sign the certificate signing request, and generate a signed certificate
  4. Make a copy of the key that doesn't need a password when Apache starts
I did all of this in an "ssl" directory I created in /etc/apache2:

sudo mkdir /etc/apache2/ssl
cd /etc/apache2/ssl


1. Generate your own Certificate Authority (CA). Make sure to remember the passphrase you're prompted for. This is what you use to sign certificates.

sudo openssl genrsa -des3 -out ca.key 4096
sudo openssl req -new -x509 -days 1825 -key ca.key -out ca.crt

2. Generate a server key and request for signing (csr). When prompted for the Common Name (CN), enter the domain name you want the certificate for. In my case, the Common Name would be "local.lo-fi.net"

sudo openssl genrsa -des3 -out local.lo-fi.net.key 4096
sudo openssl req -new -key local.lo-fi.net.key -out local.lo-fi.net.csr


3. Sign the certificate signing request with the certificate authority you created earlier

sudo openssl x509 -req -days 1825 -in local.lo-fi.net.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out local.lo-fi.net.crt

4. Make a copy of the key which doesn't cause Apache to prompt for a passphrase every time it starts.

sudo openssl rsa -in local.lo-fi.net.key -out local.lo-fi.net.key.insecure
sudo mv local.lo-fi.net.key local.lo-fi.net.key.secure
sudo mv local.lo-fi.net.key.insecure local.lo-fi.net.key


Repeat steps 2 through 4 for each distinct domain you want a certificate for. You can also use a wildcard Common Name. For example, if I wanted a certificate I could use on all subdomains of lo-fi.net, I'd enter a Common Name of "*.lo-fi.net".
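If you want to double-check a certificate before wiring it into Apache, openssl can show you what you signed and verify it against your CA -- purely optional, but a nice sanity check:

openssl x509 -noout -subject -dates -in local.lo-fi.net.crt
openssl verify -CAfile ca.crt local.lo-fi.net.crt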

Just to review, here's what you should have in your directory

$ ls /etc/apache2/ssl
ca.crt
ca.key
dev.lo-fi.net.crt
dev.lo-fi.net.csr
dev.lo-fi.net.key
dev.lo-fi.net.key.secure
local.lo-fi.net.crt
local.lo-fi.net.csr
local.lo-fi.net.key
local.lo-fi.net.key.secure


Now, you have everything you need for setting up the virtualhost for your domains.

Here's what you've all been waiting for! Configuring multiple virtualhosts with SSL.

Here's the high level review of what we need to do:
  1. Configure virtualhosting
  2. Configure mod_ssl
  3. Add virtualhost configurations for our new hosts.
The first step is to tell Apache to include some files -- specifically, the ones designed for virtualhost and SSL configuration. By default, these are not included. To include them, open /etc/apache2/httpd.conf and go to the bottom of the file. Around lines 461 and 473, you'll have the opportunity to un-comment the relevant Include lines. Here's what they should look like once you're done:
# Virtual hosts
Include /private/etc/apache2/extra/httpd-vhosts.conf
and
# Secure (SSL/TLS) connections
Include /private/etc/apache2/extra/httpd-ssl.conf

Once that is done, you'll need to edit these files somewhat. httpd-vhosts.conf configures virtualhosts running on port 80, and is designed for name-based virtualhosting. I include it here because I want to test sites on both port 80 and port 443 (https). Name-based virtualhosting is actually pretty nice for domains where SSL isn't required: all you have to do is create a new <VirtualHost> entry with a "ServerName [whatever.com]" line, and you have a new virtualhost. For this file, though, I'm just going to remove the dummy example virtualhosts and set a default. Here is what I like to use.

#
# Use name-based virtual hosting.
#
NameVirtualHost *:80

#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for all requests that do not
# match a ServerName or ServerAlias in any <VirtualHost> block.
#
<VirtualHost _default_:80>
    ServerAdmin eric@mac.com
    DocumentRoot "/Library/WebServer/Documents"
    ServerName localhost
    ErrorLog /private/var/log/apache2/error_log
    CustomLog /private/var/log/apache2/access_log common
</VirtualHost>

This configuration does two things: it enables name-based virtual hosting on port 80, and it defines a default virtualhost that handles any request not matched by a name-based virtualhost we define later. That does it for the httpd-vhosts.conf file.

Next up is the mod_ssl configuration in the httpd-ssl.conf file. The modifications here should be pretty simple. This file does two things: first, it sets up the basic configuration for the SSL module; second, it contains the virtualhost configuration for a default SSL virtualhost on port 443. The easiest way to deal with this file is to comment out that default virtualhost configuration (which starts around line 75). It's designed to give you an SSL virtualhost for any host that isn't matched by a more specific virtualhost. If that's something you'd like to keep, all you have to do is make sure it points to a valid certificate and key. Look for these lines (around lines 99 and 107):
SSLCertificateFile "/private/etc/apache2/server.crt"
and
SSLCertificateKeyFile "/private/etc/apache2/server.key"
If you want to enable this default SSL virtualhost, adjust them so they point to a certificate and key you have created. Now save that file and you're good to go.

The final step is to create some virtualhost files for your domains. I create one per domain and place them in /etc/apache2/other. Any file there with a ".conf" extension is picked up by Apache when it starts. I like to name my files after the domain I'm configuring:
local.lo-fi.net.conf
dev.lo-fi.net.conf


Finally, we can configure the virtualhosts. Here is what mine look like:

local.lo-fi.net.conf


<VirtualHost *:80>
    ServerName local.lo-fi.net
    DocumentRoot /Users/eric/WebApps/local.lo-fi.net/webroot
    ServerAdmin eric@guesswhere.net
    <Directory "/Users/eric/WebApps/local.lo-fi.net/webroot">
        AllowOverride All
        Options
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

#Note the loopback ip address we set up for this host
#local.lo-fi.net = 127.0.0.2
<VirtualHost 127.0.0.2:443>
    ServerName local.lo-fi.net
    DocumentRoot /Users/eric/WebApps/local.lo-fi.net/webroot
    ServerAdmin eric@guesswhere.net
    <Directory "/Users/eric/WebApps/local.lo-fi.net/webroot">
        AllowOverride All
        Options
        Order allow,deny
        Allow from all
    </Directory>

    # SSL Configuration
    SSLEngine on
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
    SSLOptions +FakeBasicAuth +ExportCertData +StdEnvVars +StrictRequire

    #Self Signed certificates
    SSLCertificateFile /etc/apache2/ssl/local.lo-fi.net.crt
    SSLCertificateKeyFile /etc/apache2/ssl/local.lo-fi.net.key
    SSLCertificateChainFile /etc/apache2/ssl/ca.crt

    #DON'T DO ANY INTENSIVE SSL OPERATIONS UNLESS THE FILE IS html OR php
    <Files ~ "\.(html|php?)$">
        SSLOptions +StdEnvVars
    </Files>
    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0

</VirtualHost>


dev.lo-fi.net.conf


<VirtualHost *:80>
    ServerName dev.lo-fi.net
    DocumentRoot /Users/eric/WebApps/dev.lo-fi.net/webroot
    ServerAdmin eric@guesswhere.net
    <Directory "/Users/eric/WebApps/dev.lo-fi.net/webroot">
        AllowOverride All
        Options
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

#Note the loopback ip address we set up for this host
#dev.lo-fi.net = 127.0.0.3
<VirtualHost 127.0.0.3:443>
    ServerName dev.lo-fi.net
    DocumentRoot /Users/eric/WebApps/dev.lo-fi.net/webroot
    ServerAdmin eric@guesswhere.net
    <Directory "/Users/eric/WebApps/dev.lo-fi.net/webroot">
        AllowOverride All
        Options
        Order allow,deny
        Allow from all
    </Directory>

    # SSL Configuration
    SSLEngine on
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
    SSLOptions +FakeBasicAuth +ExportCertData +StdEnvVars +StrictRequire

    #Self Signed certificates
    SSLCertificateFile /etc/apache2/ssl/dev.lo-fi.net.crt
    SSLCertificateKeyFile /etc/apache2/ssl/dev.lo-fi.net.key
    SSLCertificateChainFile /etc/apache2/ssl/ca.crt

    #DON'T DO ANY INTENSIVE SSL OPERATIONS UNLESS THE FILE IS html OR php
    <Files ~ "\.(html|php?)$">
        SSLOptions +StdEnvVars
    </Files>
    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0

</VirtualHost>


And there you have it: SSL for multiple virtualhosts in Apache on Mac OS X Leopard. Once you restart Apache, you should be up and running, serving the correct self-signed certificates to the browser.
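Before restarting, it's worth letting Apache check the configuration, and afterwards you can hit one of the new hosts over https. I'm using curl's -k flag here because it skips certificate verification, which a self-signed cert will fail until you trust your own CA:

sudo apachectl configtest
sudo apachectl restart
curl -k https://local.lo-fi.net/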

Monday, October 29, 2007

Leopard for the Web Developer - installing mod_jk

Today I was pleased to get a few comments from a fellow programmer anxious to set up his Mac so Apache could connect to Tomcat with the help of mod_jk. He came looking for info, found none, and kindly came back later to share his findings. Here are billo's instructions for installing mod_jk on Mac OS X Leopard (on an Intel Mac).



He explains in pretty good detail why it fails out of the tar-box, so I won't go into it. What I will do is give you a line-by-line guide to making it work. I had to fill in a couple of gaps after finding out what the problem was. Here's what you need to do.



1. Download and unpack the source of mod_jk (I installed version 1.2.25)



2. Make your way into the source directory (tomcat-connectors-[version]-src/native)


$ cd tomcat-connectors-1.2.25-src/native


3. Edit the apache-2.0/Makefile.apxs.in file with billo's fix. This is the solution that fixes the build. What you need to do is replace these lines:


mod_jk.la:
$(APXS) -c -o $@ -Wc,"${APXSCFLAGS} ${JK_INCL}" "${JAVA_INCL}" "${APXSLDFLAGS}" mod_jk.c ${APACHE_OBJECTS}

with these:


mod_jk.la:
$(APXS) -c -o $@ -Wc,"${APXSCFLAGS} -arch x86_64 ${JK_INCL}" "${JAVA_INCL}" "${APXSLDFLAGS} -arch x86_64 " mod_jk.c ${APACHE_OBJECTS}

The tab at the beginning of the $(APXS) line is very important! Don't remove it.



4. While you're in the [src]/native directory, configure the build files


$ ./configure --with-apxs=/usr/sbin/apxs


5. Change directory into apache-2.0 and get ready to build.


$ cd apache-2.0


6. Make the module using apxs as your compiler


$ make -f Makefile.apxs


7. Install the module


$ sudo make install


From there, you're on your own getting it configured for Apache. But the documentation for configuring mod_jk is abundant.
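A couple of quick, optional checks before you move on: make sure the module landed where apxs installs modules on Leopard (it should end up in /usr/libexec/apache2, but double-check on your system), and that it was built for the right architecture -- which was the whole point of billo's fix:

$ ls -l /usr/libexec/apache2/mod_jk.so
$ file /usr/libexec/apache2/mod_jk.so

From there it's the usual LoadModule, JkWorkersFile and JkMount setup described in the mod_jk docs.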

Leopard for the Web Developer - Restoration Strategy

One of the goals I have for installing Leopard is to start with a clean installation, so I can use it to restore from if I ever mess things up badly enough to require starting over. My intention was to install a clean OS onto an external hard drive, then create a disk image of that perfect system to restore from. I was inspired with this idea after reading this fantastic set of instructions for backing up a Mac, and reading the man pages of asr and hdiutil. With these feathers in my cap, and being the nerd that I am, I wrote up a VERY CRUDE shell script to do it all. Here it is:


#!/bin/bash

# This is a script designed to build a restorable disk image
# from a specified volume

# set up the variables.
method=$1
diskImage=$2
theVolume=$3
volumeName=$4

# Help method
help(){
echo "This is the help"
}

# This is the create method
create(){
image=$1
volume=$2
volname=$3
tempImage=$image"_temp"
#if [$volname] ; then
#echo $volname
#else
# volname="Mac Restore"
#fi;
#1. use hdiutil to create the image - make it a read write version
echo "hdiutil create $tempImage -ov -format UDRW -nocrossdev -srcfolder $volume -volname $volname"
hdiutil create $tempImage -ov -format UDRW -nocrossdev -srcfolder $volume -volname $volname

#2. mount the image
echo "hdiutil attach $tempImage.dmg"
hdiutil attach $tempImage.dmg

#3. Clean it up manually
echo "rm -f /Volumes/$volname/var/db/BootCache.playlist"
rm -f /Volumes/$volname/var/db/BootCache.playlist
echo "rm -f /Volumes/$volname/var/db/volinfo.database"
rm -f /Volumes/$volname/var/db/volinfo.database
echo "rm -rf /Volumes/$volname/var/vm/swap*"
rm -rf /Volumes/$volname/var/vm/swap*

#4. unmount the volume
echo "hdiutil detach /Volumes/$volname"
hdiutil detach /Volumes/$volname

#5. Convert the image into a read only compressed version
echo "hdiutil convert -format UDZO $tempImage.dmg -o $image"
hdiutil convert -format UDZO $tempImage.dmg -o $image

#6. Delete the temp image (hdiutil appended .dmg to the name)
echo "rm $tempImage.dmg"
rm $tempImage.dmg

#7. use asr to build the checksums so I can use the image to do a restore
echo "asr -imagescan $image.dmg"
asr -imagescan $image.dmg
}

# The restore method restores the image $1 to the volume $2
restore(){
image=$1
volume=$2

# careful: this erases the target volume
echo "asr restore --source $image --target $volume --erase"
asr restore --source $image --target $volume --erase
}

case $method in
create)
create $2 $3 $4
;;

restore)
restore $2 $3
;;

help)
help
;;
esac
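For what it's worth, here's how I call it. The script name and paths below are just placeholders for whatever you use (and keep the volume name free of spaces, since the script doesn't quote its variables). The create method takes an image path (without the .dmg extension), the source volume, and a volume name; the restore method takes the image and the target volume:

sudo ./build_restore_image.sh create /Volumes/Backup/leopard_clean /Volumes/Leopard MacRestore
sudo ./build_restore_image.sh restore /Volumes/Backup/leopard_clean.dmg /Volumes/Leopard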

The only problem is that it didn't work for me. I'd run this, saving the image file to a location on another partition of my external hard drive, and it would fail every time. Thinking I was a little too ambitious and amateur to build my own shell script for this, I tried Carbon Copy Cloner, which is fabulous backup software. In various configurations, it too failed every time. As a last resort, I tried the Disk Utility app that ships with OS X. Once again, failure.

What's interesting is how each of these approaches failed. Each time, creating the image file failed when the file size approached 4 gigabytes. That was my clue. It turns out that the partition I use for backups on the external hard drive is formatted with a FAT32 file system. That's unfortunately required because my wife's PC needs to access the drive as well. After a little research, I found that FAT32 has a file size limit of just under 4 gigs. Drats!
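If you want to check what file system a volume uses before finding out the hard way, diskutil will tell you -- look for the file system line in its output (the volume name here is just an example):

diskutil info /Volumes/Backup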

Now, my strategy for creating a perfect restorable disk image must change. I can't save the image on the backup drive as I intended. Instead of creating a restorable image, which I still think would be super cool, I'm going to have to do something else. I'll probably just restore from one disk to the other, and figure out how to save a disk image later. I'm thinking I might save it to a server on the network. I just have to set up the server :-)

Instead of focusing on the image creation, I think I'll push forward with the set up and install.

Up next: Apache, MySQL and PHP customization in Leopard.

Wednesday, October 24, 2007

Leopard for the Web Developer

I did it. I pre-ordered Mac OS X Leopard. I'm looking forward to getting it for lots of reasons, but primarily because I plan on setting up a system that's just super righteous for a web developer (mostly Java- and PHP-related).

In the posts to come, I'll list the steps involved in installing and configuring various features. My ultimate goal is to create a disk image containing the righteous setup, so I can restore it to my hard drive whenever I need to. I'll do this by installing Leopard onto an external drive before I actually install it on my Mac. This affords me the luxury of making a disk image that's totally clean and uncluttered by personal stuff, and lets me continue working as usual while I perfect the Leopard install.

Stay tuned! Among the things I plan to do:

  • Configuring Apache Virtualhosts to handle custom domains under development.
  • Setting up the hosts file for those domains
  • Running PHP4 and PHP5 on the same Apache install - both as modules!
  • Install Tomcat 5.5 and Tomcat 6 - maybe to run as a service
  • Connect the Apache web server to the Tomcat installations with mod_jk
  • Create at least one self-signed SSL Certificate for testing Secure domains in Apache
  • Install Eclipse as well as various plugins
  • Install Maven
  • Install Subversion
  • Install MySQL
  • I haven't played with it yet, but it seems like a glaring omission to leave out a Ruby on Rails set up. I may do that if I get inspired.
  • Install virtualization software. I have Parallels 2 now, but I'm considering a move to VMWare's Fusion. It's really a matter of dedicating the dollars. An upgrade to Parallels 3 is almost the same as buying Fusion outright.
So there it is! I expect that I'll get rolling with the details this weekend.

Tuesday, October 16, 2007

Respect the Barista

This year, on the dates of Sept 26th through October 4th, I celebrated the Feast of Tabernacles. It was established by God to be observed as a "statute forever in your generations" (His words, not mine -- Leviticus 23:41). For those who are unaware, essentially, the Feast of Tabernacles is the last set of holy days of the year in the Bible. This particular feast looks forward to the establishment of God's Kingdom on Earth -- really, this is the fulfillment of God's whole plan, the meaning of life. If you're interested in learning more, check out this fantastic literature about God's Holy Days, and this one about the Fall Holy Days.

All this is to give you a little context. One of everyone's favorite scriptures involving the Feast of Tabernacles is Deuteronomy 14:26, which says:

"And you shall spend that money for whatever your heart desires: for oxen or sheep, for wine or similar drink, for whatever your heart desires; you shall eat there before the Lord your God, and you shall rejoice, you and your household."

Of course, that doesn't mean that I should go to Vegas and spend it all on sin. But it does say that I can spend the money I save all year for whatever my heart desires.

Well, this year, my heart desired a good caffeine buzz. We got the Rancilio Miss Silvia espresso machine and the Rancilio Rocky espresso grinder through Jellyfish. So far, so good. With this set up and some killer local coffee beans, my 'spro rivals the best coffee shops in town. Check it out!




That's right. 25 second shots. Mmmmmmmm, good!

Wednesday, July 25, 2007

Struts 2 Security Update

A couple of weeks ago, a remote exploit was demonstrated for applications using Struts 2.0.8 and below. It's a scary one. Like System.exit(0) scary. In some ways I can't believe that it got this far because it's such a simple one.

Anyway, if you're using Struts 2 below version 2.0.9, or WebWork below version 2.0.4, do yourself a favor and UPDATE your jars.

Easy way: just update your xwork jar file (download the full lib here)
Better way: update to Struts 2.0.9

Sunday, July 22, 2007

Social Networking is Candy

I have some opinions about the current trends I've been seeing in new and emerging web sites. It seems like every single day, TechCrunch posts something about a new social networking site getting a seven-figure venture capital investment. It makes me shake my head. Here's why.

The underlying goal of social networking sites is communication with other people. And that's cool. We human beings love communication. I believe it's one of the core human needs. If you think of the major leaps in technology, many of them relate to advancement in communication methods. And if not directly, indirectly. Many advances help us to communicate with others better. The same is true with websites.

I can't help but notice a couple challenges facing a new social networking business model though.

1. Maintenance.

Keeping up with numerous accounts on various sites is challenging at best. Just as it becomes increasingly time-consuming and burdensome to have multiple phone numbers or multiple email addresses, belonging to many social networking sites requires overhead. I believe Facebook is right on with adding applications, though, because this allows its users to pool their communication needs in one place. The pull to belong to another, special-purpose social networking site becomes weaker. The more a user has invested in a single site, with a single account, the more reason there is to stay in that one place. This means that the flood of special-purpose social networking sites we see today may become overshadowed by Facebook, simply because users don't want to keep up with accounts in too many places at once.

2. Longevity.

Remember Friendster? What about MySpace? Facebook has grown immensely in the past year, simply because it's the next thing and facilitates communication in a slightly different way. And yes, unlike another popular site, it helps that it actually looks OK. MySpace is certainly not dead, but it doesn't have the buzz it had a year ago. I can't help but notice how trendy these sites are. It doesn't take a rocket scientist to figure out that soon a new site will launch, with a new twist on networking, and take over as the place to keep in touch with people. If I may make a prediction, I'll say that the next big site will do something amazing with integrating our cellphones and portable devices (online and off).

3. Utility.

Sites that have longevity have something in common: people DO something with them. They use them. There's an investment involved. Flickr has all your photos, Hotmail and Gmail have your email, eBay is your source of income, Wikipedia is your source of information, and Google is your oracle. Does MySpace or Facebook _have_ your friends? Not really; that's just where you hang out virtually. What do they DO for you? The answer: these sites give you a place and a medium to communicate. But the nature of communication is ethereal. It exists, and then it's gone. It has an expiration date. With the exception of time, there's little to no investment from an account holder at a purely social networking site. There's just the convenience of being in the same virtual place at the same virtual time as your friends, and adding customizations to your profile. This is not to say that facilitating communication isn't useful. What I'm saying is that the value of the communication on social networking sites is mostly trivial. How many MySpace pages have endless streams of "LOL" and "Dude, how's it goin'?" conversations?

4. Excitement.

One of the best parts of signing up for a social networking account is the process of building it out and loading it with connections to friends. It's really exciting when it's new, because so much happens so fast. However, once all of your close friends are signed up, the pace of growth slows and the thrill is gone. That alone makes for a short life span.

Now, I'm not saying there's no place for a social networking business model. It works for a specific purpose: it facilitates communication. But we communicate in so many different ways that one place on the internet to do it all just isn't lasting. We love new and novel means of communicating, too. It's easy to cut our losses and leave an account at one site for another, because any communication through these sites has already served its purpose. It's only of archival value once it exists. Much like a new car, the value of a communicated idea decreases at a dramatic pace (the IDEA communicated, on the other hand, is of some value... sometimes).

Social networking is candy. Lots of fun, but of no real nutritional, lasting value. I have to add as a disclaimer that I don't do much on social networking sites. They bore me. I love to communicate with people I know and people I don't, but if I want to communicate something important, I have SO many ways to do it that work better than a website.

What does this mean? I believe that social networking sites are a flash in the pan, a gold rush if you will. It's not lasting, though -- just exciting. As someone who is investing the future of his career in this industry, I just hope the hype doesn't get out of hand like it did in our recent past.

Monday, June 04, 2007

Imperial Stout via Batch Sparge

My brother-in-law and I recently started brewing beer. Being in Fort Collins, amongst some top-class breweries, we thought we'd be in good company.

Robb's got extensive experience in the craft, having brewed scores of quality beers, meads and wines. I, on the other hand, have done very little. After making two or three batches in the dorm kitchen about 10 years ago, I just gave up; I found it much easier to run down to the corner for a sixer once I was old enough to legally purchase. Nevertheless, I love making my own stuff, and having a discriminating palate, I've always wanted to get back into making beer. With summer nigh upon us, and plenty of "Yeah, dude. We gotta brew!" conversations over the past year, Robb and I finally jumped in head first.

Our last batch was a pale ale using Crystal hops exclusively. It turned out to be pretty good. A little light in body, cloudy, and not quite bitter enough, but Mmmmm tasty. One of the best things about home brew is that you get a chance to make more if it's not quite right -- and if it's perfect for that matter too.

Yesterday, we set out to build an Imperial Stout. We used an award-winning recipe from Robb's stash, with a full 18 lbs. of grain for a ~5 gallon batch (Yowza!). We decided to omit the 10 lbs. of raspberries it called for, as neither of us has the garden or the budget for such an extravagant adjunct.

We started at about 8 in the morning with some donuts, coffee and a wee bit of mead Robb happened to find hibernating in a forgotten Corny keg. The mash went a little less than controlled. We started with water that was about 4 degrees outside the envelope of enzymatic activity we wanted, and semi-frantically (only as frantic as home brewing allows) tried to cool it down, as you would a hot bowl of oatmeal. Then we went in for some breakfast. When we got back out, it was way too cool, by about 10 degrees F -- I hypothesize because it was sitting on the concrete. We decided to put it back on the heat for a bit until the temp got to where it needed to be.

I was in charge of the re-heat. I stirred while the pot was over the flame and watched the thermometer closely. It rose pretty slowly, until I gave it a good stir -- and suddenly it was like 20 degrees too hot. Real nice. With the enzymes that would help with starch conversion completely fried, we decided to check the status. Fortunately, our starch had already converted. Whew!

After a discussion with the friendly owner of the local home brew shop, we decided to batch sparge our mash. I didn't quite understand the process at first, but it turns out to be pretty simple. Instead of letting the grain sit while you spend an hour sprinkling hot water over it, you drain the mash at full speed, stirring in the sparge water in a couple of batches until you have your desired volume. The best instructions for batch sparging we found were at brew365. There are lots of other articles about the technique, but few make it seem as simple as it is.

After that, everything went as planned. We got the gravity reading we were hoping for, too (with 18 lbs. of grain, we'd BETTER). It should be quite a stout stout.

Robb just sent me a movie of the fruits of our labor. She lives!






Friday, June 01, 2007

Action based frameworks and fine-grained component dependencies

When working in an action-based framework such as Struts, there's an inevitable problem: you may want to render a page whose content the Action hasn't prepared yet.

The prime example is failed validation. Let's say we're viewing a page with dynamic content on it, such as a list of products in a store. If I submit a form on that page with a validation error, then by default the execution code -- which probably retrieved that list of products -- doesn't fire, and I'm forwarded to a validation error result page. Same action, different result page, but I still probably want the list of products on the page. I just don't want the form processed.

The problem is that action-based frameworks make an action do many things. It's not just a form processor, or just a product retriever, or a user retriever. It's all of those things at the same time.

There are solutions out there, and this is mostly a brainstorm about which kinds work well, and which do not.

In Struts 2, there's a tag (the action tag) that can invoke another action from the view. It's kind of like action chaining. For example, if I have an action named "productList" which retrieves a set of products, I can just insert that tag and I'm good to go. The nice thing is that it allows for more fine-grained actions, tailored to specific tasks. On the downside, more than one action fires per page view, which can be pretty expensive.

There's also the option of creating a custom JSP tag. This too can be pretty cool. Total customization is possible, and at a lower level. On the downside, custom tags can get a little out of hand if they're made to do more than intended. They also live pretty far outside the realm of the action framework, so the two might not play well together.

JSF: I haven't done anything with JSF, but as I understand it, it would work similarly to custom tags -- with the same downside, I think.

Dependency injection: another place I haven't roamed quite yet, although I'm sooooo on the brink of doing so. Allowing an action to be populated a la carte could be very beneficial; however, there's probably a little plumbing to do, on a per-action basis, to make those dependencies available to a page. Writing getters and setters for every action which *might* want to use a list of products could be a pain.

The answer? I don't know!

I'm pretty actively working in a Struts 2 environment, so I'm thinking that using the Struts action tag might be the way to go. However, I don't know how I feel about tying an action to a specific task.

Thursday, May 10, 2007

A Hot Dog a Day Keeps the Coronary Bypass Surgeon Away


They get a bad rap for causing arteriosclerosis, but according to this sign I saw while filling up for gas, it works the other way around.

Forget aerobic exercise if you're trying to keep your heart healthy; try a hot dog to clean out that gunky buildup. It's cheap too.

Ah, don't you love irony?

Thursday, May 03, 2007

Green marketing is not a heap of steaming compost.

A little more than a month ago, I sat down and watched 'An Inconvenient Truth'. I found lots of the arguments for the significance of global warming quite convincing. Not every point, mind you, but most points.

The night before last, I sat down to watch Glenn Beck's special on the 'Climate of Fear', the main goal of which was to poke holes in Al Gore's claims in 'An Inconvenient Truth'. Many of Glenn's arguments were also quite convincing, even after running the thick rhetoric through a filter.

No matter which side you fall on regarding global warming and its effects, we have to admit that it's becoming very difficult to ignore environmental issues. Companies are figuring this out too. You'll find more and more companies these days devoting resources to environmental consciousness. A friend of mine has seen companies move in this direction and started an interactive environmental marketing and design firm to help businesses promote their attention to environmental responsibility.

Whether being enviro-conscious is trendy or genuine, the fact that it's happening is real. Today Apple announced an environmental initiative. You'll see more businesses follow suit.

Thursday, April 19, 2007

Stinky Baby

Since I moved my family away from our home in Madison last summer, I haven't had a centralized media server for things like images and music. In fact, I work almost entirely from a laptop these days. The simplicity has its advantages, but among the disadvantages is that all of the photos Shana takes of Gideon get downloaded to her computer, where it takes extra effort to dig out and browse through the image collection.

What does this mean to you? Probably not much -- but I mention it as an excuse for why I don't post more pictures of my firstborn.

So, here they are. The shirt is one we bought at Murray's Cheese shop in New York City on a trip Shana and I took while Gideon was still in utero. I didn't think it would fit until he was well past his first year of life, which is how it's sized. However, at 11 months and some days, it fits quite well. Perhaps that also serves to illustrate how well he has been growing.

Regardless, having a shirt declaring an adjective synonymous with his last name is funny. Especially since we all now know that it's about the cheese we adore and miss so much from Wisconsin.

In this shot, I think he resembles Shana quite a bit.



Here, does he resemble me? I'll let you be the judge ,'-]

Thursday, April 12, 2007

I know why a Struts 2 File Upload fails

Yesterday I began some work on using Struts2 to do a file upload. The way it works is pretty slick. At least when it works.

The problem was that when uploading files, I'd be surprised with null values. However, when pressing the back button and re-submitting the form, it would work fine. Odd behavior indeed, but I'm not alone. Other people have had the same problem uploading files with Struts 2.0.6.

I dug into it a little bit -- to quite a deep level, actually -- and I found the bug. The good news is that it seems to be specific to Struts 2.0.6. There's a point in the Filter, which serves as the entry point of the whole framework, where the HttpServletRequest is supposed to be wrapped by the appropriate class for later data retrieval.

In the case of the mysterious broken file upload, the interceptor that populates the Struts action with the values (FileUploadInterceptor) checks whether the Request object is an instance of MultiPartRequestWrapper. If it is, the interceptor attempts to pull out the multipart data; that is, the actual file bytes. If it's not, it does nothing. When the file uploads don't work, it means the Request has not been wrapped in the MultiPartRequestWrapper, so the FileUploadInterceptor does nothing, even though the data is actually there.

The reason it doesn't get wrapped relates to the FilterDispatcher (in the org.apache.struts2.dispatcher package) and its prepareDispatcherAndWrapRequest method. In that method, there's a logic block which deals with the Dispatcher instance: when the instance is null, it sets the Dispatcher instance to one that will work. The peculiar thing is that the code that does the request wrapping is also inside that logic block. Therefore, the request doesn't get wrapped in the MultiPartRequestWrapper unless the Dispatcher instance is null. When the file uploads failed, this instance was NOT null, causing the Request to pass through as the plain RequestFacade, and the FileUploadInterceptor doesn't work with that type.

The fix? Just put the code that does the wrapping outside of the logic block that deals with the dispatcher instance.

To the Struts team's credit, it looks like this bug has been fixed. The code in the repository after the 2.0.6 tag does the wrapping correctly.

Here's some code for those who care.

First, the code that comes with Struts 2.0.6

    protected HttpServletRequest prepareDispatcherAndWrapRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException {

        Dispatcher du = Dispatcher.getInstance();

        // Prepare and wrap the request if the cleanup filter hasn't already, cleanup filter should be
        // configured first before struts2 dispatcher filter, hence when its cleanup filter's turn,
        // static instance of Dispatcher should be null.
        if (du == null) {

            Dispatcher.setInstance(dispatcher);

            // prepare the request no matter what - this ensures that the proper character encoding
            // is used before invoking the mapper (see WW-9127)
            dispatcher.prepare(request, response);

            try {
                // Wrap request first, just in case it is multipart/form-data
                // parameters might not be accessible through before encoding (ww-1278)
                request = dispatcher.wrapRequest(request, getServletContext());
            } catch (IOException e) {
                String message = "Could not wrap servlet request with MultipartRequestWrapper!";
                LOG.error(message, e);
                throw new ServletException(message, e);
            }
        } else {
            dispatcher = du;
        }
        return request;
    }

Code that works:


    protected HttpServletRequest prepareDispatcherAndWrapRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException {

        Dispatcher du = Dispatcher.getInstance();

        // Prepare and wrap the request if the cleanup filter hasn't already, cleanup filter should be
        // configured first before struts2 dispatcher filter, hence when its cleanup filter's turn,
        // static instance of Dispatcher should be null.
        if (du == null) {

            Dispatcher.setInstance(dispatcher);

            // prepare the request no matter what - this ensures that the proper character encoding
            // is used before invoking the mapper (see WW-9127)
            dispatcher.prepare(request, response);

        } else {
            dispatcher = du;
        }

        // Note that this wrapping code is where it should be
        try {
            // Wrap request first, just in case it is multipart/form-data
            // parameters might not be accessible through before encoding (ww-1278)
            request = dispatcher.wrapRequest(request, getServletContext());
        } catch (IOException e) {
            String message = "Could not wrap servlet request with MultipartRequestWrapper!";
            LOG.error(message, e);
            throw new ServletException(message, e);
        }
        return request;
    }

Friday, March 23, 2007

Imagination Dessert

Shana and I just finished a fabulous steak dinner. It was really simple. Just a couple of pan-fried Angus New York Strips with a buttery pan sauce, some quickly-sauteed green beans, and a light salad.

We didn't really have anything but some chocolate biscuits for dessert, so I thought it'd be fun to enjoy an imaginary dessert. We then volleyed ingredients and preparation techniques back and forth until we had something that was satisfying.

I was first.

Eric: Blackberries - the kind that grow in a garden, not the kind that have exoskeletons
Shana: Cream
Eric: Cinnamon
Shana: Butter
Eric: Cream Cheese
Shana: Flour
Eric: Honey - the darker wildflower kind
Shana: Some kind of Liqueur - Grand Marnier (Eric says port wine)
Eric: Mint leaves

The preparation was next. We thought we'd create a puff pastry with the flour and the butter -- but a medium puff pastry, somewhere between a real puff pastry and a pie crust, since we didn't want too much airy volume. We'd mix the cream cheese, cinnamon, liqueur and some honey together. Then, cutting the pastry into squares, we'd put a reasonable dollop of the cream cheese mixture in the center and place three berries on top. Finally, folding the pastry corner to corner, we'd make triangular little pastries, sealing the edges with a fork.

Then we'd bake these delights until the pastry is done. In the meantime, we'd whip the cream with a little bit of honey.

When the pastries are ready, we'd put them on the plate, drop some whipped cream in the center, and place the mint sprigs in the whipped cream. The last thing we'd do is drizzle the plate and dessert with yet more honey.

Then we closed our eyes and enjoyed it. What a wonderful dessert it was

,'-]

The Irony about Blogging

I've noticed an interesting thing about blogging -- a quandary and an ironic twist. The more there is to write about, the less time there is to actually do the writing.

This serves as my first post in nearly a year. I can explain a little bit with the help of this photo.



The past 9 months have been quite eventful. Merely getting used to life with a baby boy wasn't enough, so my family and I picked up our things from Madison, WI and hauled it all to The Best City in the U.S. I started off working on a freelance gig for Madison's Most Reliable Directory of Social Events. Shortly after that I got a job working as a consultant for a PLM Firm in town. We bought our first home, and then I started a new job working for a stealth-mode startup in the job seeker and social networking space. I work from home, feverishly, so we can roll out our first revision.

But back to the photo. This is me, with my new Spongebob Squarepants lunch box given to me by my friend and boss. Spongebob has become quite a role model to me. If I could only discipline myself to enjoy life so thoroughly as he. Ahh. I'm standing at my desk, taking the shot with the iSight camera on my new MacBook Pro in my cold basement office. Yes, that's insulation on the ceiling behind me.

Children, new job working from home, new house, new computer -- it's all there.

I might also take a moment to say that I'm re-dedicating myself to this blog. Communication with the world is increasingly important to me. I'm working on a re-design too, which should be ready shortly. Keep posted!