blog.mhartl | Michael Hartl's tech blog

2010-12-01

The Ruby on Rails Tutorial book & screencasts

Filed under: Uncategorized — mhartl @ 13:16

In October, I launched a product I’ve been working on for more than a year—the Ruby on Rails Tutorial book and screencast series—and yet somehow I neglected to make an announcement on my own tech blog! This post is an attempt to remedy that lamentable situation. In celebration of the launch, you can use the coupon code

blogmhartl

on the shopping cart page to get a 10% discount on any combination of Ruby on Rails Tutorial products through the end of the year.

The entire text of the Ruby on Rails Tutorial book is available for free online, and yet many people have found it worthwhile to purchase the PDF. I especially recommend the Rails Tutorial PDF/screencast bundle, which includes more than 500 pages of material and more than 15 hours of video. If you need further convincing, you can read the foreword by Derek Sivers, the praise section, or the recent review in Dr. Dobb’s.

Note: The blogmhartl coupon code for the Ruby on Rails Tutorial expires at midnight on January 1, 2011. Get it while it’s hot!

Comments (5)

2010-07-28

Deploying to Heroku with Rails 3.0.0.rc

Filed under: Ruby on Rails — mhartl @ 16:34

UPDATE: Heroku now works with the latest Bundler, so this post is (or should be) obsolete.

The Ruby on Rails Tutorial book uses the latest version of Rails, which at the time of this writing is the 3.0.0 release candidate (3.0.0.rc). Unfortunately, you can’t currently deploy applications to Heroku using the Rails release candidate because of a conflict with the latest version of Bundler. Instead of a successful deploy, you get an error like this:

$ git push heroku
Counting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 1018 bytes, done.
Total 3 (delta 1), reused 0 (delta 0)

-----> Heroku receiving push
-----> Rails app detected
-----> Detected Rails is not set to serve static_assets
       Installing rails3_serve_static_assets... done
-----> Gemfile detected, running Bundler
       Unresolved dependencies detected; Installing...
       Your Gemfile.lock was generated by Bundler 0.10.
       You must delete it if you wish to use Bundler 0.9.
       FAILED: Have you updated to use a 0.9 Gemfile?
       docs.heroku.com/gems#gem-bundler

error: hooks/pre-receive exited with error code 1

Eventually, Heroku support for the Rails release candidate (or perhaps for the final release) will no doubt be ready, but for now you can work around this problem as follows:

$ [sudo] gem uninstall bundler
$ [sudo] gem install bundler -v 0.9.26
$ rm -f Gemfile.lock

Then use this as your Gemfile, which reverts to Rails 3.0.0.beta4:

source 'http://rubygems.org'

gem 'rails', '3.0.0.beta4'
gem 'sqlite3-ruby', '1.2.5', :require => 'sqlite3'

Then install the gems:

$ bundle install

At this point, the push to Heroku should work. (Of course, that doesn’t mean it will. :-)

$ git add .
$ git commit -m "Ready to deploy"
$ git push heroku master
Comments (20)

2010-02-19

Some rvm gotchas

Filed under: Uncategorized — mhartl @ 12:59

Like many in Rails-land, I started playing with the Rails 3 beta as soon as it came out. And, like many, I discovered to my chagrin that (a) Rails 3 doesn’t play nice with Rails 2.3 and (b) it doesn’t uninstall cleanly. (I ended up removing all the 3.0.0.beta gems by hand.) If you ask around a bit, you’ll quickly discover a solution to this problem (at least if you’re running a Mac), which is to run multiple Ruby versions and install different versions of Rails on different Rubies. This works great in principle, but I ran into a few gotchas in the actual implementation, so I thought I’d share them for the benefit of Google searchers everywhere.

In order to manage the feat of multiple Rubies, the amazing Ruby Version Manager (rvm) by Wayne Seguin is simply essential. Here are the steps I followed:

After installing rvm, I first tried to install the latest version of Ruby 1.8.7, which works with both Rails 2 and Rails 3:

$ rvm install 1.8.7

At this point I got a readline error, but this was covered by the rvm troubleshooting guide:

$ rvm remove 1.8.7; rvm install readline

Unfortunately, the installation still failed to build a makefile:

$ rvm install 1.8.7
.
.
.
FAIL
$ less ~/.rvm/log/ruby-1.8.7-p249/make.error.log

[2010-02-12 08:54:30] make
make: *** No targets specified and no makefile found.  Stop.

After searching around for a while, I finally found a page (location already forgotten) with the fix, which involves including an explicit path to the readline dependency:

$ rvm remove 1.8.7
$ rvm install 1.8.7 -C --with-readline-dir=/Users/mhartl/.rvm/usr

This still didn’t work, though; on my system (OS X 10.5 Leopard) I got a segfault, which shows up when installing certain gems:

$ gem install metric_fu

/Users/mhartl/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/
spec_fetcher.rb:245: [BUG] Segmentation fault
ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin9.8.0]

Abort trap

Luckily, I have a working Ruby 1.8.7 in /usr/local/bin, so I could check the patchlevel: p174. Since rvm installs the latest version by default (1.8.7-p249 as of this writing), I had to indicate the patchlevel explicitly:

$ rvm remove 1.8.7
$ rvm install 1.8.7-p174 -C --with-readline-dir=/Users/mhartl/.rvm/usr

(Note that this also requires the explicit readline path.)
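As a quick cross-check, you can also ask the running Ruby for its own version and patchlevel from irb (the values shown here are what you’d expect once the p174 build is active):

RUBY_VERSION     # => "1.8.7"
RUBY_PATCHLEVEL  # => 174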

Finally, I had a working Ruby, but I wanted to set the rvm Ruby as the default, and my system wouldn’t remember it:

$ rvm 1.8.7-p174 --default
$ which ruby
/Users/mhartl/.rvm/rubies/ruby-1.8.7-p174/bin/ruby
new shell...
$ which ruby
/usr/local/bin/ruby

The solution was to remove the explicit /usr/local/bin path from my .bash_profile file:

before:
  export PATH="/usr/local/bin:$HOME/bin:/usr/local/mysql/bin:$PATH"
  if [[ -s /Users/mhartl/.rvm/scripts/rvm ]] ; then source /Users/mhartl/.rvm/scripts/rvm ; fi

after:
  export PATH="$HOME/bin:/usr/local/mysql/bin:$PATH"
  if [[ -s /Users/mhartl/.rvm/scripts/rvm ]] ; then source /Users/mhartl/.rvm/scripts/rvm ; fi

Wayne Seguin assures me that this step should be unnecessary as long as the PATH appears before the rvm line, but this is the only way I could get it to work on my system. (Since /usr/local/bin is on the system path, all the executables there still run; I’m not sure why I ever included it in .bash_profile in the first place.)

Finally, we’re ready for the raison d’être of all this work: multiple Rubies, multiple Rails. Here’s how to get it working (note the lack of sudo in the gem installations):

$ rvm use 1.8.7
$ gem install rails -v 2.3.5
$ rvm install 1.9.2
$ rvm use 1.9.2
$ gem install rails --pre

Voilà! Now switching between Rails versions is as easy as

$ rvm use 1.8.7

and

$ rvm use 1.9.2
Comments (13)

2009-05-15

Running rcov with RSpec

Filed under: RSpec, Ruby, Ruby on Rails — mhartl @ 16:01

I recently wanted to run rcov, the Ruby code coverage tool, on a project tested with RSpec. I think I’d done it once before, but I’d forgotten how. After searching to no avail (both the rcov home page and the RSpec page on rcov proved unhelpful), I applied the tried-and-true Wild-Assed Guess™ method and typed

$ rake spec:rcov

That worked.

The reports themselves are in the coverage/ directory:

$ open coverage/index.html

(or navigate your browser to file:///path/to/project/coverage/index.html). If you’re using Git, add coverage/* to your .gitignore file.

N.B. The inverse Rake task, which removes the coverage reports, also exists:

$ rake -T rcov
rake spec:clobber_rcov  # Remove rcov products for rcov
rake spec:rcov          # Run all specs in spec directory with RCov (excluding plugin specs)
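For the curious, the spec:rcov task comes from RSpec’s Rake task library (Spec::Rake::SpecTask). A roughly equivalent task looks like this (a sketch, not the plugin’s exact code; the exclude pattern is my own):

# e.g., in a Rakefile or lib/tasks/rcov.rake
require 'spec/rake/spectask'

namespace :spec do
  desc "Run all specs in spec directory with RCov"
  Spec::Rake::SpecTask.new(:rcov) do |t|
    t.spec_files = FileList['spec/**/*_spec.rb']
    t.rcov      = true                          # turn on rcov
    t.rcov_opts = ['--exclude', 'spec/,gems/']  # don't count specs or gems as app code
  end
end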
Comments (5)

2009-04-28

New RSS feed

Filed under: Uncategorized — mhartl @ 08:22

This blog’s RSS feed has changed; please re-subscribe here: feeds2.feedburner.com/mhartl. (It might take an hour or two to go live. If it doesn’t work for you now, come back in a bit and try again.)

Leave a Comment

2008-10-28

Using a temporary branch when doing Git merges

Filed under: Git, Insoshi — mhartl @ 15:54

Merging branches in Git is wonderfully easy compared to many other version control tools, but sometimes merging causes problems you’d rather undo. One common merge side effect is the creation of code conflicts, and sometimes a merge causes so many conflicts that you end up regretting doing the merge in the first place. In addition, for projects with many contributors (such as Insoshi), sometimes you aren’t sure if you will even want to use the contribution on the branch you’re merging in. Unfortunately, merges are very difficult to undo, so if you just do a direct merge of, say, a contributor branch into your main development branch, you’re stuck if you decide you don’t want the changes after all:

# Don't do this!
$ git checkout master
$ git merge contributor_branch

The solution is always to use a temporary branch when doing any merge whose changes you’re not sure you’ll want to keep; if the merge proves intractable due to conflicts, or you just don’t want to use the contribution, then you can simply delete the temp branch. Here’s how it works:

$ git checkout master
$ git checkout -b temp_branch
$ git merge contributor_branch

Then you can do stuff like

$ git status
$ git diff master
<resolve conflicts, polish contributed code>

If the new branch passes muster, you can then merge it in:

$ git checkout master
$ git merge temp_branch
$ git branch -D temp_branch

(Note here that I’ve deleted the temp branch in the final step, just to clean up.) If, on the other hand, you decide not to continue with the merge, you can just delete the temp branch without merging it in:

$ git checkout master
$ git branch -D temp_branch

Either way, the lesson is the same: consistently using temp branches when doing dangerous merges is a great way to avoid the agony of merge remorse.

Comments (1)

2008-10-14

Setting up your Git repositories for open source projects at GitHub

Filed under: Git, Insoshi — long @ 12:08

[This is a guest post from Long Nguyen. —mhartl]

Like a lot of projects in the Ruby on Rails world, the Insoshi social networking platform uses Git and GitHub to manage its open source development and contributions. In setting up the repositories for Insoshi, I’ve applied the version control experience I gained at Discover, where I was technical lead for the software configuration management (SCM) team. Since some aspects of our setup aren’t obvious if you haven’t managed large projects before, we at Insoshi decided to share the details so that other GitHub projects might benefit as well.

We’ll start by reviewing the typical Git workflow based on pull requests, then discuss some problems you might run into with a “typical” repository setup, and finally explain the details of preparing the Insoshi Git repository for collaboration.

Why Pull Requests?

Git was originally developed by Linus Torvalds to host the Linux kernel, and pull requests are the de facto standard for submitting contributions in Git because that’s what Linus does. (He talked about this in his Google Tech Talk on Git.) The concept of the pull request is straightforward: you notify someone that you’ve made an update (via email, messaging on GitHub, etc.) and let them know where to find it. They can then pull in your changes and merge them with their work.

Except for that interaction, everyone works within their own repository and on their own schedule. There’s no process waiting to be completed that blocks you from moving on to whatever you need/want to do next. And you’re not forcing anyone to drop what they’re doing right now to handle your request.

It’s all very polite. And it works well in the context of distributed development since you avoid all kinds of coordination issues.

What’s needed?

If you want to contribute to an open source project, here’s really all that you need:

  1. A publicly accessible repository where your changes can be found
  2. A local repository for your development

Even if you’re new to Git, these both seem like pretty straightforward things to do—especially if you’re using GitHub for the public repository: your repository is just a fork of the main project repository.

Let’s set up our repository by going to the official Insoshi repository and clicking on the fork button:

[Screenshot: the GitHub fork button]

I’ll need to make note of the public clone URL for the official repository and my private clone URL for my newly created fork:

  • Official Insoshi public clone URL
    git://github.com/insoshi/insoshi.git
  • My fork’s private clone URL
    git@github.com:long/insoshi.git

Your local repository: The “obvious” thing to do

At this point, I’ll be tempted to go ahead and make a local clone of my fork:

$ git clone git@github.com:long/insoshi.git

and immediately get to work.

Technically, there’s nothing wrong with that. And as an individual developer starting a new project, it’s what you do, but there are several disadvantages to this seemingly straightforward approach. One of the major benefits of a distributed version control system like Git is that each repository is on an equal footing; in particular, we would like every fork to have the same master branch, so that if the “official” Insoshi repository should ever be lost there would be plenty of redundant backups. We also want it to be easy for each developer to pull in changes from the official repository; the “obvious” approach isn’t set up for that. Finally, it’s a bad idea in general to work on the master branch; experienced Git users typically work on separate development branches and then merge those branches into master when they’re done.

What we’d like is a way to connect up the local repository in a way that will

  • Keep the repositories in sync so that each contains the full “official” repository
  • Allow developers to pull in official updates
  • Encourage working on branches other than master

In the “obvious” configuration, I’m not set up to do any of that:

  • There’s no local connection to the official repository for updates
  • There’s no mechanism in place to push official updates to my fork on GitHub
  • We’re working directly on the master branch

Your local repository: The “right” way

Keeping the big picture in mind, here are the commands I’ve run to set up my local repository (using the GitHub id long):

$ git clone git://github.com/insoshi/insoshi.git
$ cd insoshi
$ git branch --track edge origin/edge
$ git branch long edge
$ git checkout long
$ git remote add long git@github.com:long/insoshi.git
$ git fetch long
$ git push long long:refs/heads/long
$ git config branch.long.remote long
$ git config branch.long.merge refs/heads/long

Let’s take a detailed look at what these steps accomplish.

So what does it all mean?

Step one

Create a local clone of the Insoshi repository:

$ git clone git://github.com/insoshi/insoshi.git

You should note that the Git URL for the clone references the official Insoshi repository and not the URL of my own fork (i.e., the clone URL is git://github.com/insoshi/insoshi.git instead of git@github.com:long/insoshi.git). This way, the official repository is the default remote (aka ‘origin’), and the local master branch tracks the official master.

Step two

I have to change into the repository to perform additional git setup:

$ cd insoshi

Step three

Insoshi also has an ‘edge’ branch for changes that we want to make public but that may require a bit more polishing before we’d consider them production-ready (in the past this has included migrating to Rails 2.1 and Sphinx/Ultrasphinx). Our typical development lifecycle looks something like

development -> edge -> master

I want to create a local tracking branch for it:

$ git branch --track edge origin/edge

Steps four and five

As I mentioned before, I’m resisting the temptation to immediately start working on the local ‘master’ and ‘edge’ branches. I want to keep those in sync with the official Insoshi repository.

I’ll keep my changes separate by creating a new branch ‘long’ that’s based off edge and checking it out:

$ git branch long edge
$ git checkout long

By the way, you can actually combine the two commands if you like, using just the ‘git checkout’ command with the -b flag:

$ git checkout -b long edge

You can name this branch anything that you want, but I’ve chosen my GitHub id so that it’s easy to identify.

I’m starting my changes off of ‘edge’ since it contains all the latest updates; any contribution I submit a pull request for will be merged first into the official Insoshi ‘edge’ branch to allow for public testing before it’s merged into ‘master’.

Steps six and seven

I’m finally adding the remote reference to my fork on GitHub:

$ git remote add long git@github.com:long/insoshi.git

I’ve used my GitHub id once again, this time as the remote nickname.

We should run a fetch immediately in order to sync up the local repository with the fork:

$ git fetch long

Step eight

I’m pushing my new local branch up to my fork. Since it’ll be a new branch on the remote end, I need to fully specify the remote refspec:

$ git push long long:refs/heads/long

Steps nine and ten

Now that the new branch is up on my fork, I want to set the branch configuration to track it:

$ git config branch.long.remote long
$ git config branch.long.merge refs/heads/long

Setting the remote lets me simply use

$ git push

to push changes on my development branch up to my fork.

Setting the merge configuration is mainly for completeness at this point. But if you end up working on more than one machine (work/home, desktop/laptop, etc.), it’ll allow you to just use

$ git pull

to grab the changes you’ve pushed up to your fork.

Isn’t that a lot of extra work to do?

This may seem like a lot of work up front, but it’s all configuration work that you’d eventually do anyway. If you’re really that concerned about the extra typing, I’ve got a shell script for you.

The extra work is worth the effort, because with this configuration

  • My changes will be easily identifiable in my named branch
  • I can easily get updates from the main Insoshi repository
  • Any updates I’ve pulled into master and edge are automatically pushed up to my fork on GitHub

The last one is a bonus because the default refspec for remotes is refs/heads/*:refs/heads/*. This means that the simple ‘git push’ command will push up changes for all local branches that have a matching branch on the remote. And if I make it a point to pull in updates to my local master and edge but not work directly on them, my fork will match up with the official repository.

So what is the benefit of all this to open source projects like Insoshi?

  • The easier it is for the contributor to pull in updates, the more likely it will be that the pull request will be for code that merges easily with the latest releases (with few conflicts)
  • You can tell if someone is pulling updates by looking at their master and edge branches and seeing if they match up with the latest branches on the main repository
  • By getting contributors in the habit of working on branches, you’re going to get better organized code contributions

Basically, the less effort that’s required to bring in code via a pull request, the sooner it can be added to the project release. And at the end of the day, that’s really what it’s all about.

Putting (pushing and pulling) it all together

Now that we’ve covered all the details, let’s go through the full set of steps needed to make a contribution to a project like Insoshi:

  1. Fork the Insoshi repository on GitHub:

    [Screenshot: the GitHub fork button]

  2. Follow the Git steps above or use the shell script to set up your local repository
  3. Checkout the local branch, just to be sure:
    $ git checkout long
  4. Make some changes (and remember your development branch is against ‘edge’) and commit them:
    [make changes in a text editor]
    $ git add .
    $ git commit -m "My great contribution"
    $ git push
  5. Go to your fork and branch at GitHub (I’m at long/insoshi @ long) and click on the pull request button:

    [Screenshot: the pull request button]

  6. Tell us about what you just did and make sure “insoshi” is a recipient: [Screenshot: the pull request form]
  7. Bask in the glory of being an open-source contributor!
Comments (30)

2008-09-26

Using Rails to serve different content to humans and robots

Filed under: Insoshi, Ruby on Rails — mhartl @ 11:38

This post answers the question, How do you use Rails to do one thing for robots, and another thing for humans?

Why would you want to do this? In our case, the Insoshi home page forwards to a portal page that uses frames in order to have an interface that unifies the sites on the insoshi.com domain with those off-site, such as our GitHub repository and bug tracker. The frames page is horribly search-unfriendly, though, so we serve bots the actual content of insoshi.com/home/index (our routes map / to /home/index). (Note: The front of the portal page as seen by a human is the same as the index page served to bots; be careful about doing anything else, since bots can punish you if you use this technique for anything slimy.)
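(For reference, the route mentioned above is just the standard Rails 2.x root mapping; this is a sketch of the relevant line, not necessarily Insoshi’s exact routes file:)

# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  map.root :controller => 'home', :action => 'index'   # maps / to /home/index
end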

Our method is to use a before filter in the Home controller. Here’s the code (minus some irrelevant bits):

class HomeController < ApplicationController
  before_filter :forward_nonbots_to_portal, :only => "index"
  
  .
  .
  .

  private
  
    # Return true if the user agent is a bot.
    def robot?
      bot = /(Baidu|bot|Google|SiteUptime|Slurp|WordPress|ZIBB|ZyBorg)/i
      request.user_agent =~ bot
    end
    
    # Allow an explicit override of the forward_nonbots_to_portal filter.
    def no_redirect?
      params[:redirect] == 'false' or RAILS_ENV != 'production'
    end
    
    def forward_nonbots_to_portal
      redirect_to "http://portal.insoshi.com" unless robot? or no_redirect?
    end
end

The key here is the robot? method, which has a regex with a list of the most common bot user agents:

    # Return true if the user agent is a bot.
    def robot?
      bot = /(Baidu|bot|Google|SiteUptime|Slurp|WordPress|ZIBB|ZyBorg)/i
      request.user_agent =~ bot
    end

If the user isn’t a bot, we redirect them to the portal.
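You can convince yourself that the regex does what we want with a quick check in irb (the user-agent strings below are just illustrative examples):

bot = /(Baidu|bot|Google|SiteUptime|Slurp|WordPress|ZIBB|ZyBorg)/i
bot =~ "Googlebot/2.1 (+http://www.google.com/bot.html)"     # => 0   (a match)
bot =~ "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8)"   # => nil (no match)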

N.B. The second boolean, no_redirect?, prevents the forwarding in development mode and also allows us to override the redirect by passing a redirect=false parameter. This latter condition allows us to link directly to the home page, without a redirect, by using insoshi.com/?redirect=false. In particular, the portal menu home link itself uses this URL, because otherwise clicking on the home link repeatedly would cause a bunch of nested portal pages to appear.

Comments (2)

2008-09-21

Finding and fixing mass assignment problems in Rails applications

Filed under: Insoshi, mass assignment, Ruby on Rails — mhartl @ 18:14

Last week I received an email from Eric Chapweske (of Slantwise Design and the Rail Spikes blog) alerting me to mass assignment vulnerabilities in the Insoshi social network source code. (See my post on mass assignment for a quick review of the concept, and don’t miss Eric’s mass assignment article for a more thorough treatment.) I quickly set to work fixing the problems, and within a few hours of receiving the email I’d pushed out a patched version to the Insoshi GitHub repository. Since the process was so instructive, and since mass assignment vulnerabilities are so common, I thought I’d share some of the details of what it took to fix them.

Fixing the models and controllers

The first step in solving mass assignment problems is to find them, so I whipped up a little find_mass_assignment plugin to make it easier:

$ script/plugin install git://github.com/mhartl/find_mass_assignment.git

(You’ll need Git and Rails 2.1 or later for this to work.)

This defines a Rake task to find mass assignment vulnerabilities. (It works by searching through the controllers for likely mass assignment and then looking in the models to see if they don’t define attr_accessible.) Let’s run it on the buggy Insoshi code and see what we get:

$ rake find_mass_assignment

/path/to/app/controllers/activities_controller.rb
    46      @activity = Activity.new(params[:event])
    68        if @activity.update_attributes(params[:event])

/path/to/app/controllers/comments_controller.rb
    20      @comment = parent.comments.new(params[:comment].

/path/to/app/controllers/messages_controller.rb
    50      @message = Message.new(:parent_id    => original_message.id,
    61      @message = Message.new(params[:message].merge(:sender => current_person,

/path/to/app/controllers/photos_controller.rb
    39      @photo = Photo.new(params[:photo].merge(person_data))
    61        if @photo.update_attributes(:primary => true)

/path/to/app/controllers/posts_controller.rb
    59        if @post.update_attributes(params[:post])
    157          post = @topic.posts.new(params[:post].merge(:person => current_person))
    159          post = @blog.posts.new(params[:post])

/path/to/app/controllers/topics_controller.rb
    28      @topic = @forum.topics.new(params[:topic].merge(:person => current_person))
    44        if @topic.update_attributes(params[:topic])

Yikes! That’s a lot of problems. How do we squash all these bugs?

One of the vulnerable models is the Post model, which is the base class for the ForumPost and BlogPost models. We’ll use the ForumPost model as our example. First we disable attr_accessible in the Post model, since we want to force all the derived classes to redefine it:

app/models/post.rb

class Post < ActiveRecord::Base
  include ActivityLogger
  has_many :activities, :foreign_key => "item_id", :dependent => :destroy
  attr_accessible nil
end

Then we set attr_accessible in the ForumPost model to allow only the post body to be set by mass assignment:

app/models/forum_post.rb

class ForumPost < Post
  .
  .
  .

  attr_accessible :body
  
  belongs_to :topic,  :counter_cache => true
  belongs_to :person, :counter_cache => true
  
  validates_presence_of :body, :person
  validates_length_of :body, :maximum => 5000
  .
  .
  .
end

Then in the Posts controller we update

  post = @topic.posts.build(params[:post].merge(:person => current_person))

to set the person attribute explicitly:

  post = @topic.posts.build(params[:post])
  post.person = current_person

Bypassing attr_accessible

This fixes the controller action, but unfortunately the corresponding RSpec specs fail. Having a good test suite proved invaluable in fixing the mass assignment problems, but the tests use mass assignment themselves, and much of that code fails. For example, here is part of the Post spec:

spec/models/post_spec.rb

describe ForumPost do
  
  before(:each) do
    @post = topics(:one).posts.build(:body => "Hey there",
                                     :person => people(:quentin))
  end
  .
  .
  .
end

This fails because of the attempt to set the person attribute by mass assignment. We could fix this as in the controller:

describe ForumPost do
  
  before(:each) do
    @post = topics(:one).posts.build(:body => "Hey there")
    @post.person = people(:quentin)
  end
  .
  .
  .
end

Unfortunately, the tests are riddled with this sort of code, and it’s a nightmare to make all such changes by hand. Moreover, inside the tests we simply don’t care about mass assignment vulnerabilities, so making a bunch of cumbersome changes is particularly annoying. Luckily, there’s a nice solution; after searching for a bit, I found an inspiring Pastie, which led me to open up ActiveRecord::Base and add some unsafe methods to create Active Record objects that bypass attr_accessible:

config/initializers/unsafe_build_and_create.rb

class ActiveRecord::Base

  # Build and create records unsafely, bypassing attr_accessible.
  # These methods are especially useful in tests and in the console.
  
  def self.unsafe_build(attrs)
    record = new
    record.unsafe_attributes = attrs
    record
  end
  
  def self.unsafe_create(attrs)
    record = unsafe_build(attrs)
    record.save
    record
  end
  
  def self.unsafe_create!(attrs)
    unsafe_build(attrs).save!
  end

  def unsafe_attributes=(attrs)
    attrs.each do |k, v|
      send("#{k}=", v)
    end
  end
end

(By putting this file in the config/initializers/ directory, we ensure that the additions will be loaded automatically as part of the Rails environment.)

With these methods in hand, we still have to update the tests by hand, but the edits are much simpler (and many can be done by search-and-replace):

describe ForumPost do
  
  before(:each) do
    @post = topics(:one).posts.unsafe_build(:body => "Hey there",
                                            :person => people(:quentin))
  end
  .
  .
  .
end

We can use these methods in the controllers, too, of course, but if we do, the word “unsafe” serves as a constant reminder that we’d better be really sure we want to bypass attr_accessible.
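For example, a quick console session might look like this (assuming the database contains at least one Person record):

post = ForumPost.unsafe_build(:body => "Hey there", :person => Person.first)
post.person    # => the Person record, even though :person isn't attr_accessible
post.body      # => "Hey there"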

After making all the fixes, running our Rake task shows only one potentially vulnerable model:

$ rake find_mass_assignment
/Users/mhartl/rails/insoshi_core/app/controllers/photos_controller.rb
    40      @photo = Photo.new(params[:photo].merge(person_data))
    62        if @photo.update_attributes(:primary => true)

Checking the Photo model, we see that it defines attr_protected instead of attr_accessible (and the comment explains why):

app/models/photo.rb

class Photo < ActiveRecord::Base
  include ActivityLogger
  UPLOAD_LIMIT = 5 # megabytes
  
  # attr_accessible is a nightmare with attachment_fu, so use
  # attr_protected instead.
  attr_protected :id, :person_id, :parent_id, :created_at, :updated_at
  .
  .
  .
end

With that, we’re done, and our application is secure. Huzzah!

Comments (10)

Mass assignment in Rails applications

Filed under: mass assignment, Ruby on Rails — mhartl @ 18:13

This is a brief review of mass assignment in Rails. See the follow-up post on Finding and fixing mass assignment problems in Rails applications for some more tips on how to find and fix mass assignment problems.

We’ll begin with a simple example. Suppose an application has a User model that looks like this:

# == Schema Information
# Table name: users
#
#  id                         :integer(11)     not null, primary key
#  email                      :string(255)     
#  name                       :string(255)     
#  password                   :string(255)
#  admin                      :boolean(1)      not null
class User < ActiveRecord::Base
  validates_presence_of :email, :password
  validates_uniqueness_of :email
  .
  .
  .
end

Note the presence of an admin boolean to identify administrative users. With this model, the Users controller might have this standard update code:

  def update
    @user = User.find(params[:id])

    respond_to do |format|
      if @user.update_attributes(params[:user])
        flash[:notice] = 'User was successfully updated.'
        format.html { redirect_to(@user) }
      else
        format.html { render :action => "edit" }
      end
    end
  end

This works fine, but note that the line

  if @user.update_attributes(params[:user])

performs an update to the @user object through the params hash, assigning all the @user attributes at once—that is, as a mass assignment.

The problem with mass assignment is that some malicious [cr|h]acker might write a script to PUT something like name=New+Name&admin=1, thereby adding himself as an administrative user! This would be a Bad Thing™. The standard solution to this problem is to use attr_accessible in the model to declare explicitly the attributes that can be modified by mass assignment. To protect our User model, for example, we would write

class User < ActiveRecord::Base

  attr_accessible :email, :name, :password

  validates_presence_of :email, :password
  validates_uniqueness_of :email
  .
  .
  .
end

Since :admin isn’t included in the attr_accessible argument list, the User model’s admin attribute is safe from unwanted modification.
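To see the protection in action, try a mass assignment that includes :admin in the console; the protected attribute is simply ignored (a sketch, and the exact return value depends on the column default):

user = User.new(:name => "Mallory", :email => "mallory@example.com",
                :password => "secret", :admin => true)
user.admin          # => false (the :admin key was ignored by mass assignment)
user.admin = true   # explicit, non-mass assignment still works
user.admin          # => true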

This seems simple enough, but the rub is that remembering to protect against mass assignment is difficult. Using mass assignment doesn’t affect the normal operations of the site, so it’s hard to notice the problem. Moreover, although you could shut off mass assignment globally, often there are many models that are used internally and never get modified directly by a web interface. Not being able to use mass assignment for these models is inconvenient, and manually making all attributes attr_accessible is cumbersome and error-prone. So, what’s a Rails developer to do?

Spurred by an email from Eric Chapweske of Slantwise Design, I recently audited the Insoshi social network for mass assignment vulnerabilities. Doing this manually was annoying, so in the process I developed a simple plugin to find likely vulnerabilities automatically, by searching through the controllers for likely mass assignment and then looking in the models to see if they didn’t define attr_accessible. The result is a list of potential trouble spots.
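The controller-scanning half of that heuristic is simple enough to sketch in a few lines of Ruby (an illustration of the idea only, not the plugin’s actual implementation, which also checks the corresponding models for attr_accessible):

# Flag controller lines that appear to pass the params hash straight to
# new/create/build/update_attributes.
MASS_ASSIGNMENT = /\b(new|create|create!|build|update_attributes|update_attributes!)\s*\(\s*params\[/

Dir['app/controllers/**/*.rb'].each do |path|
  File.readlines(path).each_with_index do |line, i|
    puts "#{path}:#{i + 1}: #{line.strip}" if line =~ MASS_ASSIGNMENT
  end
end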

To use the find_mass_assignment plugin, simply install it from GitHub as follows:

$ script/plugin install git://github.com/mhartl/find_mass_assignment.git

(You’ll need Git and Rails 2.1 or later for this to work.) The plugin defines a Rake task to find mass assignment vulnerabilities; running it on the example Users controller from above would yield the following:

$ rake find_mass_assignment

/path/to/app/controllers/users_controller.rb
  5  if @user.update_attributes(params[:user])

This tells us that line 5 in the Users controller has a likely mass assignment vulnerability.

The find_mass_assignment plugin doesn’t fix mass assignment problems automatically, but by making it more convenient to find them I hope it can significantly improve the odds that they will be caught (and fixed!) quickly.

Comments (3)