Installing a Gemstone Seaside Server on Ubuntu 10.10
By Ramon Leon - 30 October 2010 under Apache, Databases, Seaside, Smalltalk, Gemstone, Linux

I'll assume you've already installed Apache and now want to install Gemstone behind it as a Seaside server. Let's install a few things that we're going to need later, just to get the dependencies out of the way. Log in to your server/workstation as an admin user, someone who can sudo.

```shell
sudo aptitude install bc zip build-essential apache2-threaded-dev ia32-libs
```

Now let's set up the user we're going to run Gemstone under.

```shell
sudo adduser glass
```

Add him to the admin group so he can sudo.

```shell
sudo usermod -a -G admin glass
```

Log in as this user.

```shell
su glass
cd
```

Download Gemstone and install it.

```shell
wget seaside.gemstone.com/scripts/installGemstone.sh
chmod +x installGemstone.sh
./installGemstone.sh
```

Download some init scripts so we can set up Gemstone as a service rather than starting it manually.

```shell
wget onsmalltalk.com/downloads/gemstone_initd_scripts.tgz
tar xf gemstone_initd_scripts.tgz
```

Edit each of these scripts: change the line RUNASUSER=USER to RUNASUSER=glass, and change the first line to #!/bin/bash instead of #!/bin/sh. The Gemstone scripts need bash, and Ubuntu changed the /bin/sh link to point to dash instead of bash, which won't work.

Install the init scripts. There's a shorter way to write these, but it will fit better on the blog if I do each one separately.

```shell
sudo mv gemstone_initd_scripts/gemstone /etc/init.d/
sudo mv gemstone_initd_scripts/gs_fastcgi /etc/init.d/
sudo mv gemstone_initd_scripts/netldi /etc/init.d/
sudo chmod a+x /etc/init.d/gemstone
sudo chmod a+x /etc/init.d/gs_fastcgi
sudo chmod a+x /etc/init.d/netldi
sudo chown root:root /etc/init.d/gemstone
sudo chown root:root /etc/init.d/gs_fastcgi
sudo chown root:root /etc/init.d/netldi
sudo update-rc.d gemstone defaults
sudo update-rc.d gs_fastcgi defaults
sudo update-rc.d netldi defaults
```

Start just the gemstone and netldi services.
```shell
sudo /etc/init.d/gemstone start
sudo /etc/init.d/netldi start
```

Grab GemTools and fire it up. I'm installing on my local machine, so I can just fire this up here; if you're installing on a remote server, refer to my previous post about setting up X11Forwarding and running GemTools on a remote host.

```shell
wget seaside.gemstone.com/squeak/GemTools-1.0-beta.8-244x.app.zip
unzip GemTools-1.0-beta.8-244x.app.zip
GemTools-1.0-beta.8-244x.app/GemTools.sh
```

Edit the connection to point at localhost, log in to Gemstone, open Monticello, open the MetacelloRepository, and load either ConfigurationOfSeaside28 or ConfigurationOfSeaside30. I'm still on 2.8, so that's what I'm loading. If you're going to load 3.0, you'll need to edit the gs_fastcgi script accordingly, as it's built to start up 2.8: just change the DAEMON line to runSeasideGems30 instead of runSeasideGems. Click the admin button on the gem launcher and check the "commit on almost out of memory" option (just in case loading anything takes up too much temp space), then run ConfigurationOfSeaside28 load in the workspace. Once Seaside is loaded, we can continue and start up the Seaside gems.

```shell
sudo /etc/init.d/gs_fastcgi start
```

Next we need to set up Apache to use FastCGI and enable a few modules we'll need; first, we have to build the FastCGI module.

```shell
wget www.fastcgi.com/dist/mod_fastcgi-current.tar.gz
tar zxvf mod_fastcgi-current.tar.gz
cd mod_fastcgi*
cp Makefile.AP2 Makefile
make top_dir=/usr/share/apache2
sudo make install top_dir=/usr/share/apache2
echo "LoadModule fastcgi_module /usr/lib/apache2/modules/mod_fastcgi.so" > fastcgi.load
sudo mv fastcgi.load /etc/apache2/mods-available/
sudo a2enmod fastcgi expires proxy proxy_http proxy_balancer deflate rewrite
```

And fix the hosts file so FastCGI doesn't wig out over the IPv6 address you're not even using.

```shell
sudo nano /etc/hosts
```

Comment out the IPv6 line like so:

```
#::1 localhost ip6-localhost ip6-loopback
```

Now create a configuration for the site.
```shell
sudo nano /etc/apache2/sites-available/gemstone
```

Use the config below, modifying where necessary.

```apache
ServerAdmin your@someplace.com

Listen 8081
Listen 8082
Listen 8083

FastCgiExternalServer /var/www1 -host localhost:9001 -pass-header Authorization
FastCgiExternalServer /var/www2 -host localhost:9002 -pass-header Authorization
FastCgiExternalServer /var/www3 -host localhost:9003 -pass-header Authorization

<VirtualHost *:80>
    ServerName yourComputerName
    RewriteEngine On
    DocumentRoot /var/www/

    # http expiration
    ExpiresActive on
    ExpiresByType text/css A864000
    ExpiresByType text/javascript A864000
    ExpiresByType application/x-javascript A864000
    ExpiresByType image/gif A864000
    ExpiresByType image/jpeg A864000
    ExpiresByType image/png A864000
    FileETag none

    # http compression
    DeflateCompressionLevel 9
    SetOutputFilter DEFLATE
    AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml
    BrowserMatch ^Mozilla/4 gzip-only-text/html
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
    BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

    # Let apache serve any static files NOW
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} -f
    RewriteRule (.*) %{DOCUMENT_ROOT}$1 [L]

    <Proxy *>
        AddDefaultCharset off
        Order allow,deny
        Allow from all
    </Proxy>

    ProxyPreserveHost On

    # main app
    ProxyPass / balancer://gemfarm/
    ProxyPassReverse / balancer://gemfarm/

    <Proxy balancer://gemfarm>
        Order allow,deny
        Allow from all
        BalancerMember http://localhost:8081
        BalancerMember http://localhost:8082
        BalancerMember http://localhost:8083
    </Proxy>
</VirtualHost>

<VirtualHost *:8081>
    DocumentRoot /var/www1
</VirtualHost>
<VirtualHost *:8082>
    DocumentRoot /var/www2
</VirtualHost>
<VirtualHost *:8083>
    DocumentRoot /var/www3
</VirtualHost>
```

Make a few symbolic links for those www directories; FastCGI seems to want these to all be different, and Apache will complain if they don't actually exist.

```shell
sudo ln -s /var/www /var/www1
sudo ln -s /var/www /var/www2
sudo ln -s /var/www /var/www3
```

And enable the new site and restart Apache.
```shell
sudo a2ensite gemstone
sudo /etc/init.d/apache2 restart
```

Hopefully you've gotten no errors at this point, and you can navigate to yourMachineName/seaside/config and see that everything is working. Gemstone is now installed as a service, as are netldi and the Seaside FastCGI gems, and they'll start up automatically when the machine starts. I'm not thrilled with running the Seaside gems this way, because if they die nothing will restart them. I'll be following up later with a post on running the Seaside gems and maintenance gem under Monit, which will ensure they're restarted should a gem crash for any reason. Gemstone itself and netldi I'm not worried about; for them this approach should work fine.

Since I did this on my workstation, which already had Apache installed as well as other things I run, I may have missed a dependency or two that I already had installed and didn't notice. If the above procedure doesn't work for you for any reason, please let me know what I overlooked.

Faster Remote Gemstone
By Ramon Leon - 23 October 2010 under Databases, Linux, Performance, Smalltalk, Gemstone

Just a quick post to document some knowledge for myself and for anyone using Gemstone on a remote server like SliceHost or, my preference, Linode, and trying to run GemTools locally through an ssh tunnel. It's slow, very slow: several seconds per mouse click. OmniBrowser is just too chatty. Fortunately, Linux has a better way to do it: X11Forwarding. Run the GemTools client on the remote server and forward the UI for just that app to your workstation. Now, if you have a mostly Windows background like I do, this might be something new to you; it certainly was to me. I'd kind of heard of it, but didn't realize what it was until today, after I got it working. Just one more frakking cool thing Linux can do, and much nicer than VNC/Remote Desktop, because it means you don't have to install any window manager and the other hundred dependencies that go with it on the server.
Every piece of software installed on a remote server is a piece of software that needs updating, could be hacked, or could make the next upgrade not go smoothly, so the less stuff installed on a server the better, as far as I'm concerned. I happen to be running the latest 64-bit Ubuntu 10.04 LTS on a Linode server, so if you're running something else the steps might be slightly different. To prep the server, which I'm assuming is a headless server managed via ssh, you'll only need to install two packages: one to enable X11 forwarding, and one to install a library that the Squeak VM needs for its UI, which isn't installed by default on a headless server.

```shell
sudo aptitude install xauth libgl1-mesa-dev ia32-libs
```

You'll also need to enable X11Forwarding in /etc/ssh/sshd_config by ensuring this line exists:

```
X11Forwarding yes
```

Restart sshd if you had to change this because it wasn't enabled.

```shell
sudo /etc/init.d/ssh restart
```

Now just upload the GemTools one-click image and unzip it.

```shell
scp GemTools-1.0-beta.8-244x.app.zip glass@serverName:
ssh glass@serverName
unzip GemTools-1.0-beta.8-244x.app.zip
```

And everything is ready to go. Now ssh in again, but this time with forwarding and compression enabled.

```shell
ssh -X -C glass@serverName
```

Now any graphical program started on the server from this session will run on the server, but its UI will display as a window on the client, as if it were running directly on the client. Now fire up GemTools on the server...

```shell
cd GemTools-1.0-beta.8-244x.app
./GemTools.sh
```

GemTools will start up and appear to run locally, but it's actually running remotely, which means OmniBrowser can be as chatty as it likes: it's all running on localhost from its point of view. The X display, which is built to do this much better, is running on your machine. Now GemTools will run fast enough that you could actually develop directly in Gemstone if you like. Not that I actually would; Pharo has much better tool support.
I think this will be the first of a run of posts about Gemstone; there's a lot to learn when switching dialects. I can tell you this: well-tested code ports more easily, so apparently I've got a lot of tests to write that I probably should have written from the start. Oh well, live and learn.

A Simple Thread Pool for Smalltalk
By Ramon Leon - 28 July 2010 under Smalltalk, Seaside, Squeak, Performance, Ajax, Programming

Forking a thread in Smalltalk is easy: wrap something in a block and call fork. It's so easy that you can easily become fork happy and get yourself into trouble by launching too many processes. About six months ago, my excessive background forking in a Seaside web app finally started hurting; I'd have images that seemed to lock up for no reason using 100% CPU, and they'd get killed by monitoring processes, causing lost sessions. There was a reason: the process scheduler in Squeak/Pharo just isn't built to handle a crazy number of threads, and everything will slow to a crawl if you launch too many.

I had a search result page in Seaside that launched about 10 background threads for every page render; the page would then poll for the results of those computations, collect up any results found, and AJAX them into the page. Each one needs to run in its own thread, because any one of them may hang up and take upwards of 30 seconds to finish its work, even though the average time would be under a second. I don't want all the results being stalled waiting for the one slow result, so it made sense to have each on its own thread. This worked for quite a while with nothing but simple forking, but eventually the load rose to the point that I needed a thread pool, so I could limit the number of threads actually doing the work to a reasonable amount. So, let's write a thread pool. First we'll need a unit of work to put on the thread, similar to a block or a future.
Something we can return right away when an item is queued, and that can be checked for a result or used as a future result. We'll start by declaring a worker class with a few instance variables I know I'll need: a block for the actual work to be done, an expiration time to know whether the work still needs to be done, a value cache to avoid doing the work more than once, a lock to block a calling thread treating the worker as a future value, and an error slot to store any exception, to be re-thrown on the main thread in case of failure.
I'll also want a few constructors for creating them, one that just takes a block, and one that takes a block and an expiration time. For my app, if I don't have results within a certain amount of time, I just don't care anymore, so I'd rather have the work item expire and skip the work.
On the instance side, let's initialize the instance and set up the necessary accessors for the constructors above.
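The post's actual code is Smalltalk (shown in the original as snippets that didn't survive here). As a rough illustration of the shape of that worker, here is a Python analogue; the class name PoolWorker and all method names are this sketch's own inventions, and threading.Event stands in for the Smalltalk Semaphore:

```python
import threading
import time

class PoolWorker:
    """A queued unit of work: a callable plus an optional expiry time."""

    def __init__(self, work, expires_at=None):
        self.work = work               # the block of work to be done
        self.expires_at = expires_at   # after this time, skip the work
        self.result = None             # value cache, so work runs at most once
        self.error = None              # exception captured on the worker thread
        self.done = threading.Event()  # plays the role of the Semaphore

    # Constructor taking just a block.
    @classmethod
    def on(cls, work):
        return cls(work)

    # Constructor taking a block and an expiration time in seconds.
    @classmethod
    def on_expiring(cls, work, seconds):
        return cls(work, expires_at=time.time() + seconds)
```

A caller gets one of these back immediately when queuing work, and can hold on to it while the pool evaluates the block in the background.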
Now, since this is for use in a thread pool, I'll want a non-blocking method of forcing evaluation of the work, so the pool's worker thread isn't blocked. If the work hasn't expired, evaluate the block and store any errors, then signal the Semaphore so any waiting clients are unblocked.
I'll also want a possibly blocking value method for retrieving the results of the work. If you call this right away, then it'll act like a future and block the caller until the queue has had time to process it using the evaluate method above.
But if you want to poll for a result, we'll need a method to see if the work has been done yet. We can do this by checking the state of the Semaphore; the worker has a value only after the Semaphore has been signaled.
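Continuing the hedged Python analogue from above (again, names are mine, not the post's; the original is Smalltalk), the three methods just described look roughly like this:

```python
import threading
import time

class PoolWorker:
    def __init__(self, work, expires_at=None):
        self.work = work
        self.expires_at = expires_at
        self.result = None
        self.error = None
        self.done = threading.Event()  # stands in for the Semaphore

    def evaluate(self):
        """Run on the pool thread: do the work unless expired, trap any
        error rather than raising it here, then signal waiters."""
        try:
            if self.expires_at is None or time.time() < self.expires_at:
                self.result = self.work()
        except Exception as e:
            self.error = e
        finally:
            self.done.set()  # unblock anyone waiting in value()

    def value(self):
        """Possibly blocking: wait like a future, re-raising any error
        trapped on the pool thread back on the calling thread."""
        self.done.wait()
        if self.error is not None:
            raise self.error
        return self.result

    def has_value(self):
        """Poll without blocking: true once evaluate() has signalled."""
        return self.done.is_set()
```

The key property is that errors cross threads as data: evaluate() stores the exception, and value() re-raises it on whichever thread asks for the result.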
That's all we need for the worker. Now we need a queue to make use of it, so we'll declare the class with some necessary instance variables, initialize them to reasonable defaults, and add some accessors to adjust the pool sizes. Since a thread pool is generally, by nature, something you only want one of (there are always exceptions, but I prefer simplicity), we'll rely on Smalltalk itself to ensure a single pool by making all of the pool methods class methods, with the ThreadPool class the only instance. I'll use a shared queue to handle the details of locking, ensuring the workers share the pool of work safely.
Once you have a pool, you need to manage how many threads are actually in it and have it adjust to the workload. There are two main questions we need to ask to do this: are there too few threads, or too many, given the current workload? Let's answer those questions.
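As a sketch of those two sizing questions, assuming a pool with configurable minimum and maximum thread counts (the names below are this sketch's own, and Python's queue.Queue stands in for Smalltalk's SharedQueue):

```python
import queue

class ThreadPool:
    """Class-side state only, mirroring the post's all-class-methods design."""
    min_workers = 2
    max_workers = 10
    work_queue = queue.Queue()  # SharedQueue analogue: handles its own locking
    worker_count = 0

    @classmethod
    def needs_more_threads(cls):
        # Grow while work is backing up and we're under the ceiling.
        return (not cls.work_queue.empty()) and cls.worker_count < cls.max_workers

    @classmethod
    def has_too_many_threads(cls):
        # Shrink when the queue is idle and we're above the floor.
        return cls.work_queue.empty() and cls.worker_count > cls.min_workers
```

A manager process can then poll these two predicates periodically and start or stop workers accordingly.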
We also need a method for a worker to grab a queued work item and work it, and we don't ever want this to error out and kill a worker thread, since the work item should trap any error and re-throw it to the queuing thread. But just to be safe, we'll wrap it.
Now that workers have something to do, we'll need to be able to start and stop worker threads in order to increase or decrease the working thread count. Once a worker is started, we'll want it to simply work forever and the shared queue will handle blocking the workers when there's no work to do. We'll also want the worker threads running in the background so they aren't taking priority over foreground work like serving HTTP requests.
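A hedged Python sketch of that worker loop (my own names again; daemon threads stand in for Smalltalk's background process priority, and queue.get() blocking stands in for the SharedQueue blocking idle workers):

```python
import queue
import threading

work_queue = queue.Queue()

def work_once():
    # The work item itself traps and stores its own errors, but wrap the
    # whole thing anyway: an error here must never kill the worker thread.
    try:
        job = work_queue.get()  # blocks while there's no work to do
        job()
    except Exception:
        pass

def start_worker():
    # daemon=True keeps the worker in the background so it never outlives
    # or blocks the main program, akin to running at a low process priority.
    def run_forever():
        while True:
            work_once()
    thread = threading.Thread(target=run_forever, daemon=True)
    thread.start()
    return thread
```

Stopping a worker is the interesting part, which the post handles next.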
To kill a worker, we'll just queue a job to kill the active process, which will be whatever worker picks up the job. This is a simple way to ensure we don't kill a worker that's doing something important. It requires actually using the queue, so here are a couple of quick methods to queue a job, plus some extensions on BlockClosure/BlockContext to make using the queue as simple as forking.
And now we're able to queue a job to kill a thread, making sure to double check at time of actual execution that the pool is still too big and the thread still needs to die.
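The poison-pill idea can be sketched in Python like so (names are this sketch's own; the post's re-check that the pool is still too big at execution time is elided here for brevity):

```python
import queue
import threading

work_queue = queue.Queue()

class DieSignal(Exception):
    """Poison pill: raised inside a worker to make that worker exit."""

def queue_work(fn, *args):
    # Queuing a job is as easy as forking, like the BlockClosure extension.
    work_queue.put(lambda: fn(*args))

def kill_a_worker():
    # Whichever worker happens to pick this job up is the one that dies,
    # so we never interrupt a worker in the middle of real work.
    def die():
        raise DieSignal()
    queue_work(die)

def worker_loop():
    while True:
        job = work_queue.get()
        try:
            job()
        except DieSignal:
            return        # this worker exits cleanly
        except Exception:
            pass          # ordinary job errors never kill a worker

def start_worker():
    thread = threading.Thread(target=worker_loop, daemon=True)
    thread.start()
    return thread
```

Because the queue is FIFO and the pill is just another job, a worker always finishes its current work before it can possibly pick up its own death.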
Of course, something has to decide when to increase the size of the pool and when to decrease it, and it needs a method to do so.
We need to ensure the thread pool is always up and running, and that something is managing it, so we'll hook the system startUp routine and kick off the minimum number of workers and start a single manager process to continually adjust the pool size to match the workload.
And clean up everything on shutdown, so every time the image starts up we're starting from a clean slate.
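Pulling the lifecycle together, here is a hedged Python analogue of the startup/shutdown hooks (every name is this sketch's own; in the post this is wired into the Smalltalk image's startUp/shutDown machinery, which has no direct Python equivalent):

```python
import queue
import threading
import time

class ThreadPool:
    work_queue = queue.Queue()
    workers = []
    min_workers = 2
    running = False

    @classmethod
    def start_up(cls):
        # On startup: spin up the minimum workers and one manager process.
        cls.running = True
        for _ in range(cls.min_workers):
            cls._start_worker()
        threading.Thread(target=cls._manage, daemon=True).start()

    @classmethod
    def _start_worker(cls):
        thread = threading.Thread(target=cls._work, daemon=True)
        thread.start()
        cls.workers.append(thread)

    @classmethod
    def _work(cls):
        # Work forever; the timeout lets the loop notice shutdown.
        while cls.running:
            try:
                job = cls.work_queue.get(timeout=0.1)
            except queue.Empty:
                continue
            try:
                job()
            except Exception:
                pass

    @classmethod
    def _manage(cls):
        # The manager would periodically grow/shrink the pool to match the
        # workload; the sizing logic is elided in this sketch.
        while cls.running:
            time.sleep(1)

    @classmethod
    def shut_down(cls):
        # On shutdown: stop everything so the next startup is a clean slate.
        cls.running = False
        cls.workers.clear()
```

The manager thread is deliberately the only thing that touches pool size, so workers themselves stay trivially simple.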
And that's it, a simple thread pool using a shared queue to do all the dirty work of dealing with concurrency. I now queue excessively without suffering the punishment entailed by forking excessively. Now rather than...
I just do...
Or in Seaside...
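The original snippets for these three examples didn't survive extraction; in spirit, the before/after is plain fork versus queueWork. A hedged Python rendering of that contrast (the tiny inline pool below is a stand-in so the example runs; the real thing is the ThreadPool the post builds, and do_work is a placeholder job):

```python
import queue
import threading

# Minimal stand-in pool so the comparison below actually runs.
work_queue = queue.Queue()

def pool_worker():
    while True:
        work_queue.get()()

threading.Thread(target=pool_worker, daemon=True).start()

results = []
forked_done = threading.Event()
queued_done = threading.Event()

def do_work(done_flag):
    results.append("worked")
    done_flag.set()

# Before: fork-happy style, one fresh thread per task ([...] fork in Smalltalk).
threading.Thread(target=do_work, args=(forked_done,)).start()

# After: queue the block instead ([...] queueWork in Smalltalk); a small,
# fixed set of pool threads does the work no matter how many jobs queue up.
work_queue.put(lambda: do_work(queued_done))
```

Same one-line call site either way, which is the whole point: the pool makes queuing as cheap to write as forking, without the unbounded thread count.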
And my app is running like a champ again: no more hanging images due to forking like a drunken sailor. UPDATE: For the source, see the ThreadPool package on SqueakSource.

Dynamic Web Development with Seaside PDF Available for Purchase
By Ramon Leon - 1 February 2010 under Seaside, Smalltalk, Updates, Ajax, Apache

Reposted from Lukas Renggli's blog: At the end of the payment process (PayPal) you will be redirected to the download area, where you are able to get the latest builds of the PDF version of the book. If you bookmark the page, you will be able to download fixes and extra chapters as we integrate them into the online version. By buying the PDF version you support our hard work on the book. We wish to thank the European Smalltalk User Group, inceptive.be, Cincom Smalltalk and GemStone Smalltalk for generously sponsoring this book. We are looking for additional sponsors; if you are interested, please contact us. If you are a publisher and are interested in publishing this material, please let us know.

So please, support the Seaside community and buy the book. I know I will.

SandstoneDb GOODS Adaptor
By Ramon Leon - 14 May 2009 under Smalltalk, Seaside, Databases

SandstoneDb was written mostly as a Rails-ish API for a simple object database, for use in small office and prototype applications (plus I needed a db for this blog). Which object database wasn't really important to me at the time; it was the API that I wanted, so I made the actual object store backing it pluggable and initially wrote two different store adaptors for it. The first was a memory store, little more than a dictionary of dictionaries, against which I wrote all the unit tests. The second was a Prevayler-style file-based store that used SmartRefStream serialization and loaded everything from disk on startup; this provided a crash-proof Squeak image that wouldn't lose data.
I figured that eventually, for fun, I might get around to writing adaptors for some of the other object database back-ends that are in use: GOODS and OmniBase. I never really got around to it; however, Nico Schwarz has written a GOODS adaptor for SandstoneDb. This will let you hook up multiple Squeak images to a single store, and it should scale better than the file store that SandstoneDb defaults to. Go check it out and let him know what you think of it. This is just the kind of project that'll help programmers new to Seaside get going and get accustomed to using an object database rather than a relational one. It looks like his first blog post as well, so swing by and leave a comment to encourage more posts; we need more bloggers spreading the word!