
Mass Assignment Authorizer Definition

Early in 2012, a developer named Egor Homakov took advantage of a security hole at GitHub (a Rails app) to gain commit access to the Rails project.

His intent was mostly to point out a common security issue with many Rails apps that results from a feature known as mass assignment (and he did so rather loudly). In this article, we'll review what mass assignment is, how it can be a problem, and what you can do about it in your own applications.

What is Mass Assignment?

To begin, let's first take a look at what mass assignment means, and why it exists. By way of an example, imagine that we have the following class in our application:

```ruby
# Assume the following fields: [:id, :first, :last, :email]
class User < ActiveRecord::Base
end
```

Mass assignment allows us to set a bunch of attributes at once:

```ruby
attrs = {:first => "John", :last => "Doe", :email => "john.doe@example.com"}
user = User.new(attrs)

user.first #=> "John"
user.last  #=> "Doe"
user.email #=> "john.doe@example.com"
```

Without the convenience of mass assignment, we'd have to write an assignment statement for each attribute to achieve the same result. Here's an example:

```ruby
attrs = {:first => "John", :last => "Doe", :email => "john.doe@example.com"}
user = User.new
user.first = attrs[:first]
user.last  = attrs[:last]
user.email = attrs[:email]

user.first #=> "John"
user.last  #=> "Doe"
user.email #=> "john.doe@example.com"
```

Obviously, this can get tedious and painful; so we bow at the feet of laziness and say, yes yes, mass assignment is a good thing.
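Under the hood there is nothing magical about mass assignment. A minimal plain-Ruby sketch of the idea (no Rails, `SketchUser` is a hypothetical stand-in for an ActiveRecord model): for each key in the hash, call the matching setter if one exists.

```ruby
# Plain-Ruby sketch of mass assignment (SketchUser is illustrative,
# not ActiveRecord). For each key in the hash, call the "key=" setter.
class SketchUser
  attr_accessor :first, :last, :email

  def initialize(attrs = {})
    assign_attributes(attrs)
  end

  def assign_attributes(attrs)
    attrs.each do |key, value|
      setter = "#{key}="
      send(setter, value) if respond_to?(setter)
    end
  end
end

user = SketchUser.new(:first => "John", :last => "Doe",
                      :email => "john.doe@example.com")
user.first #=> "John"
```

The danger discussed below follows directly from this loop: it will happily call *any* setter named in the hash, not just the ones you had in mind.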

The (Potential) Problem With Mass Assignment

But wait! One problem with sharp tools is that you can cut yourself with them. Mass assignment is no exception to this rule.

Suppose now that our little imaginary application has acquired the ability to fire missiles. As we don't want the world to turn to ash, we add a boolean permission field to the model to decide who can fire missiles.

```ruby
class AddCanFireMissilesFlagToUsers < ActiveRecord::Migration
  def change
    add_column :users, :can_fire_missiles, :boolean, :default => false
  end
end
```

Let's also assume that we have a way for users to edit their contact information: this might be a form somewhere that is accessible to the user with text fields for the user's first name, last name, and email address.

Our friend John Doe decides to change his name and update his email account. When he submits the form, the browser will issue a request similar to the following:

PUT http://missileapp.com/users/42?user[first]=NewJohn&user[email]=john.doe@newemail.com

The update action within the UsersController might look something like:

```ruby
def update
  user = User.find(params[:id])
  if user.update_attributes(params[:user]) # Mass assignment!
    redirect_to home_path
  else
    render :edit
  end
end
```

Given our example request, the params hash will look similar to:

```ruby
{
  :id => 42,                                                         # parsed by the router
  :user => {:first => "NewJohn", :email => "john.doe@newemail.com"}  # parsed from the incoming query string
}
```

Now let's say that NewJohn gets a little sneaky. You don't necessarily need a browser to issue an HTTP request, so he writes a script that issues the following request:

PUT http://missileapp.com/users/42?user[can_fire_missiles]=true

When this request hits our update action, the update_attributes call will see :can_fire_missiles => true and give NewJohn the ability to fire missiles! Woe has become us.

This is exactly how Egor Homakov gave himself commit access to the Rails project. Because Rails is so convention-heavy, fields like id, user_id, and created_at are quite easily guessable. Further, if there aren't protections in place, you can gain access to things that you're not supposed to be able to touch.

How to Deal With Mass Assignment

So how do we protect ourselves from wanton mass assignment? How do we prevent the NewJohns of the world from firing our missiles with reckless abandon?

Luckily, Rails provides a couple of tools to manage the issue: attr_protected and attr_accessible.

attr_protected: The BlackList

Using attr_protected, you can specify which fields may never be mass-assigned:

```ruby
class User < ActiveRecord::Base
  attr_protected :can_fire_missiles
end
```

Now, any attempt to mass-assign the can_fire_missiles attribute will fail.

attr_accessible: The WhiteList

The problem with attr_protected is that it's too easy to forget to add a newly implemented field to the list.

This is where attr_accessible comes in. As you might have guessed, it's the opposite of attr_protected: you list only the attributes that you want to be mass-assignable.

As such, we can switch our class to this approach:

```ruby
class User < ActiveRecord::Base
  attr_accessible :first, :last, :email
end
```

Here, we're explicitly listing out what can be mass-assigned. Everything else will be disallowed. The advantage here is that if we, say, add an admin flag to the model, it will automatically be safe from mass assignment.

As a general rule, you should prefer attr_accessible to attr_protected, as it helps you err on the side of caution.
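Conceptually, a whitelist is nothing more than a filter applied to the incoming hash before assignment happens. A rough plain-Ruby sketch of the idea (not the real ActiveModel implementation; `ACCESSIBLE` and the method name are illustrative):

```ruby
# Sketch of whitelist filtering in the spirit of attr_accessible.
# ACCESSIBLE and sanitize_for_mass_assignment are illustrative names.
ACCESSIBLE = [:first, :last, :email]

def sanitize_for_mass_assignment(attrs)
  # Keep only whitelisted keys; silently drop everything else.
  attrs.select { |key, _| ACCESSIBLE.include?(key.to_sym) }
end

params = {:first => "NewJohn", :can_fire_missiles => true}
sanitize_for_mass_assignment(params) #=> {:first => "NewJohn"}
```

Because the filter runs before any setters are called, the sneaky :can_fire_missiles key never reaches the model at all.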

Mass Assignment Roles

Rails 3.1 introduced the concept of mass-assignment "roles". The idea is that you can specify different attr_accessible and attr_protected lists for different situations.

```ruby
class User < ActiveRecord::Base
  attr_accessible :first, :last, :email              # :default role
  attr_accessible :can_fire_missiles, :as => :admin  # :admin role
end

user = User.new({:can_fire_missiles => true}) # uses the :default role
user.can_fire_missiles #=> false

user2 = User.new({:can_fire_missiles => true}, :as => :admin)
user2.can_fire_missiles #=> true
```

Application-wide Configuration

You can control mass assignment behavior in your application by editing the config.active_record.whitelist_attributes setting within the config/application.rb file.

If set to false, mass assignment protection will only be activated for the models where you specify an attr_accessible or attr_protected list.

If set to true, mass assignment will be impossible for all models unless they specify an attr_accessible or attr_protected list. Please note that this option is enabled by default from Rails 3.2.3 forward.


Beginning with Rails 3.2, there is additionally a configuration option to control the strictness of mass assignment protection: config.active_record.mass_assignment_sanitizer.

If set to :strict, it will raise an ActiveModel::MassAssignmentSecurity::Error any time your application attempts to mass-assign something it shouldn't. You'll need to handle these errors explicitly. As of v3.2, this option is set for you in the development and test environments (but not production), presumably to help you track down where mass-assignment issues might be.

If not set to :strict, mass-assignment protection is handled silently: only the attributes it's supposed to set are set, and no error is raised.
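The difference between the two behaviors can be sketched in a few lines of plain Ruby (hypothetical names; this is the contract, not the real Rails sanitizer classes):

```ruby
# Sketch of the two sanitizer behaviors (hypothetical classes and
# method, not Rails' own implementation).
class MassAssignmentError < StandardError; end

def sanitize(attrs, allowed, strict)
  forbidden = attrs.keys.reject { |key| allowed.include?(key) }
  if strict && forbidden.any?
    # Strict mode: blow up loudly so the developer notices.
    raise MassAssignmentError, "Can't mass-assign: #{forbidden.join(', ')}"
  end
  # Silent mode: just drop the forbidden keys.
  attrs.select { |key, _| allowed.include?(key) }
end

attrs = {:first => "NewJohn", :can_fire_missiles => true}

sanitize(attrs, [:first, :last, :email], false) #=> {:first => "NewJohn"}

begin
  sanitize(attrs, [:first, :last, :email], true)
rescue MassAssignmentError => e
  e.message #=> "Can't mass-assign: can_fire_missiles"
end
```

Silent dropping keeps production users out of your error pages; strict raising makes the problem impossible to miss while developing, which is why Rails 3.2 wires it up in development and test.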

Rails 4 Strong Parameters: A Different Approach

The Homakov Incident initiated a conversation around mass assignment protection in the Rails community (and onward to other languages, as well); an interesting question was raised: does mass assignment security belong in the model layer?

Some applications have complex authorization requirements. Trying to handle all the special cases in the model layer can begin to feel clunky and over-complicated, especially if you find yourself plastering role-scoped attr_accessible declarations all over the place.

A key insight here is that mass assignment security is really about handling untrusted input. As a Rails application receives user input in the controller layer, developers began wondering whether it might be better to deal with the issue there instead of ActiveRecord models.

The result of this discussion is the Strong Parameters gem, available for use with Rails 3, and a default in the upcoming Rails 4 release.

Assuming that our missile application is built on Rails 3, here's how we might update it for use with the strong parameters gem:

Add the gem

Add the following line to the Gemfile:

```ruby
gem "strong_parameters"
```

Turn off model-based mass assignment protection

Within config/application.rb:

```ruby
config.active_record.whitelist_attributes = false
```

Tell the models about it

```ruby
class User < ActiveRecord::Base
  include ActiveModel::ForbiddenAttributesProtection
end
```

Update the controllers

```ruby
class UsersController < ApplicationController
  def update
    user = User.find(params[:id])
    if user.update_attributes(user_params) # see below
      redirect_to home_path
    else
      render :edit
    end
  end

  private

  # Require that :user be a key in the params Hash,
  # and only accept :first, :last, and :email attributes
  def user_params
    params.require(:user).permit(:first, :last, :email)
  end
end
```

Now, if you attempt something like user.update_attributes(params[:user]), you'll get an error in your application. You must first call permit on the params hash with the keys that are allowed for a specific action.

The advantage to this approach is that you must be explicit about which input you accept at the point that you're dealing with the input.
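The require/permit contract itself is small enough to approximate in plain Ruby. A sketch of the semantics (hypothetical Params class, not the real ActionController::Parameters):

```ruby
# Sketch of strong parameters' require/permit semantics.
# Params and ParameterMissing are illustrative stand-ins, not Rails'.
class ParameterMissing < StandardError; end

class Params
  def initialize(hash)
    @hash = hash
  end

  # require: the key must be present, or the request is malformed.
  def require(key)
    value = @hash[key] or raise ParameterMissing, key.to_s
    Params.new(value)
  end

  # permit: keep only the whitelisted keys, drop everything else.
  def permit(*keys)
    @hash.select { |key, _| keys.include?(key) }
  end
end

params = Params.new(:id => 42,
                    :user => {:first => "NewJohn", :can_fire_missiles => true})
params.require(:user).permit(:first, :last, :email)
#=> {:first => "NewJohn"}
```

Note how the whitelist now lives next to the action that consumes the input, rather than in the model: each controller can permit a different set of keys for the same model.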

Note: If this were a Rails 4 app, the controller code is all we'd need; the strong parameters functionality will be baked in by default. As a result, you won't need the ActiveModel::ForbiddenAttributesProtection include in the model or the separate gem in the Gemfile.

Wrapping Up

Mass assignment can be an incredibly useful feature when writing Rails code. In fact, it's nearly impossible to write reasonable Rails code without it. Unfortunately, mindless mass assignment is also fraught with peril.

Hopefully, you're now equipped with the necessary tools to navigate safely in the mass assignment waters. Here's to fewer missiles!




Weapons of Mass Assignment

Patrick McKenzie, Kalzumeus

A Ruby on Rails app highlights some serious, yet easily avoided, security vulnerabilities.

In May 2010, during a news cycle dominated by users' widespread disgust with Facebook privacy policies, a team of four students from New York University published a request for $10,000 in donations to build a privacy-aware Facebook alternative. The software, Diaspora, would allow users to host their own social networks and own their own data. The team promised to open-source all the code they wrote, guaranteeing the privacy and security of users' data by exposing the code to public scrutiny. With the help of front-page coverage from the New York Times, the team ended up raising more than $200,000. They anticipated launching the service to end users in October 2010.

On September 15, Diaspora released a "pre-alpha developer preview" of its source code. I took a look at it, mostly out of curiosity, and was struck by numerous severe security errors. I spent the next day digging through the code locally and trying to get in touch with the team to address them privately. The security errors were serious enough to jeopardize the goals of the project.

This article describes the mistakes that compromised the security of the Diaspora developer preview. Avoiding such mistakes via better security practices and better choice of defaults will make applications more secure.

Diaspora Architecture

Diaspora is written against Ruby on Rails 3.0, a popular modern Web framework. Most Rails applications run as very long-lived processes within a specialized Web server such as Mongrel or Thin. Since Rails is not threadsafe, typically several processes will run in parallel on a machine, behind a threaded Web server such as Apache or nginx. These servers serve requests for static assets directly and proxy dynamic requests to the Rails instances.

Architecturally, Diaspora is designed as a federated Web application, with user accounts (seeds) collected into separately operated services (pods), in a manner similar to e-mail accounts on separate mail servers. The primary way end users access their Diaspora accounts is through a Web interface. Pods communicate with each other using encrypted XML messages.

Unlike most Rails applications, Diaspora does not use a traditional database for persistence. Instead, it uses the MongoMapper ORM (object-relational mapping) to interface with MongoDB, which its makers describe as a "document-oriented database" that "bridges the gap between key/value stores and traditional relational databases." MongoDB is an example of what are now popularly called NoSQL databases.

While Diaspora's architecture is somewhat exotic, the problems with the developer preview release stemmed from very prosaic sources.

Security in Ruby on Rails

Web application security is a very broad and deep topic, and is treated in detail in the official Rails security guide and the OWASP (Open Web Application Security Project) list of Web application vulnerabilities, which would have helped catch all of the issues discussed in this article. While Web application security might seem overwhelming, the errors discussed here are elementary and can serve as an object lesson for those building public-facing software.

A cursory analysis of the source code of the Diaspora prerelease revealed on the order of a half-dozen critical errors, affecting nearly every class in the system. There were three main genres, detailed below. All code samples pulled from Diaspora's source at launch [note: I have forked the Diaspora public repository on GitHub and created a tag so that this code can be examined] were reported to the Diaspora team immediately upon discovery, and have been reported by the team as fixed.

Authentication != Authorization: The User Cannot Be Trusted

The basic pattern in the following code was repeated several times in Diaspora's code base: security-sensitive actions on the server used parameters from the HTTP request to identify pieces of data they were to operate on, without checking that the logged-in user was actually authorized to view or operate on that data.

For example, if you were logged in to a Diaspora seed and knew the ID of any photo on the pod, changing the URL of any visible destroy action to include the ID of any other user's photo would let you delete that second photo. Rails makes such exploits very easy, since URLs to actions are trivially easy to guess, and object IDs "leak" all over the place. Do not assume that an object ID is private.

Diaspora, of course, does attempt to check credentials. It uses Devise, a library that handles authentication, to verify that you can get to the destroy action only if you are logged in. As shown in the code example above, however, Devise does not handle authorization—checking to see that you are, in fact, permitted to do the action you are trying to do.


When Diaspora shipped, an attacker with a free account on any Diaspora node had, essentially, full access to any feature of the software vis-à-vis someone else's account. That is quite a serious vulnerability, but it combines with other vulnerabilities in the system to allow attackers to commit more subtle and far-reaching attacks than merely deleting photos.

How to avoid this

Check authorization prior to sensitive actions. The easiest way to do this (aside from using a library to handle it for you) is to take your notion of a logged-in user and access user-specific data only through that. For example, Devise gives all actions access to a current_user object, which is a stand-in for the currently logged-in user. If an action needs to access a photo, it should call current_user.photos.find(params[:id]). If a malicious user has subverted the params hash (which, since it comes directly from an HTTP request, must be considered "in the hands of the enemy"), that code will find no photo (because of how associations scope to the current_user). This will instantly generate an ActiveRecord exception, stopping any potential nastiness before it starts.
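The scoping idea can be sketched without Rails: look the record up only inside the logged-in user's own collection, and treat "not found" as a hard stop. (PhotoCollection and RecordNotFound are hypothetical stand-ins for an ActiveRecord association and its exception.)

```ruby
# Sketch of association scoping (hypothetical stand-ins, no Rails).
class RecordNotFound < StandardError; end

class PhotoCollection
  def initialize(photos)
    @photos = photos # {id => photo}, owned by exactly one user
  end

  # Lookups only ever see this user's own photos; anything else
  # raises, just as a scoped ActiveRecord find would.
  def find(id)
    @photos.fetch(id) { raise RecordNotFound, "no photo #{id} for this user" }
  end
end

current_user_photos = PhotoCollection.new(42 => "my_cat.jpg")

current_user_photos.find(42) #=> "my_cat.jpg"

begin
  current_user_photos.find(99) # someone else's photo ID from the params hash
rescue RecordNotFound
  "attack stopped"
end
```

The security property falls out of the data structure: there is simply no code path from this user's collection to another user's photo, so no per-action permission check can be forgotten.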

Mass Assignment Will Ruin Your Day

We have learned that if we forget authorization, then a malicious user can do arbitrary bad things to people. In the following example, since the user update method is insecure, an attacker could meddle with other users' profiles. But is that all an attacker can do?

Unseasoned developers might assume that an update method can only update the fields present on the Web form that submits to it. For example, the form shown in figure 1 is fairly benign, so one might think that all someone can do with this bug is deface the user's profile name and e-mail address.

This is dangerously wrong.

Rails by default uses something called mass update, where update_attributes and similar methods accept a hash as input and sequentially call the accessor for each symbol in the hash. Objects will update both database columns (or their MongoDB analogs) and any other attribute in the hash that has a setter method defined.


Let's take a look at the Person object in the following code to see what mischief this lets an attacker do. Note that instead of updating the profile, update_attributes updates the Person: Diaspora's internal notion of the data associated with one human being, as opposed to the login associated with one e-mail address (the User). Calling something profile when it is really a Person is a good way to hide the security implications of such code from a reviewer. Developers should be careful to name things correctly.

This means that by changing a Person's owner_id, one can reassign the Person from one account (User) to another, allowing one not only to deny arbitrary victims their use of the service, but also to take over their accounts. This allows the attacker to impersonate them, access their data at will, etc. This works because the "one" method in MongoDB picks the first matching entry in the DB it can find, meaning that if two Persons have the same owner_id, the owning User will nondeterministically control one of them. This lets the attacker assign your Person's owner_id to be his own, which gives the attacker a 50-50 shot at gaining control of your account.

It gets worse: since the attacker can also reassign his own Person's owner_id to a nonsense string, this delinks his personal data from his account, which will ensure that his account is linked with the victim's personal data.

It gets worse still. Note the serialized_key column. If you look deeper into the User class, that is its serialized public/private encryption key pair. Diaspora seeds use encryption when talking with each other so that the prying eyes of Facebook can't read users' status updates. This is Diaspora's core selling point. Unfortunately, an attacker can use the combination of unchecked authorization and mass update to silently overwrite the user's key pair, replacing it with one the attacker generated. Since the attacker now knows the user's private key, regardless of how well implemented Diaspora's cryptography is, the attacker can read the user's messages at will. This compromises Diaspora's core value proposition to users: that their data will remain safe and in their control.

This is what kills most encryption systems in real life. You don't have to beat encryption to beat the system; you just have to beat the weakest link in the chain around it. That almost certainly isn't the encryption algorithm—it is probably some inadequacy in the larger system added by a developer in the mistaken belief that strong cryptography means strong security. Crypto is not soy sauce for security.

This attack is fairly elementary to execute. It can be done with a tool no more complicated than Firefox with Firebug installed: add an extra parameter to the form, switch the submit URL, and instantly gain control of any account you wish. Of particular note to open source software projects and other scenarios where the attacker can be assumed to have access to the source code, this vulnerability is very visible: the controller in charge of authorization and access to the user objects is a clear priority for attackers because of the expected gains from subverting it. A moderately skilled attacker could find this vulnerability and create a script to weaponize it in a matter of minutes.

How to avoid this

This particular variation of the attack could be avoided by checking authorization, but that does not by itself prevent all related attacks. An attacker can create an arbitrary number of accounts, changing the owner_id on each to collide with a victim's legitimate user ID, and in doing so successfully delink the victim's data from his or her login. This amounts to a denial of service attack, since the victim loses the utility of the Diaspora service.

After authorization has been fixed, write access to sensitive data should be limited to the maximum extent practical. A suitable first step would be to disable mass assignment, which should always be turned off in a public-facing Rails app. The Rails team presumably keeps mass assignment on by default because it saves many lines of code and makes the 15-minute blog demo nicer, but it is a security hole in virtually all applications.

Luckily, this is trivial to address: Rails has a mechanism called attr_accessible, which makes only the listed model attributes available for mass assignment. Allowing only safe attributes to be mass-assigned (for example, data you would expect the end users to be allowed to update, such as their names rather than their keys) prevents this class of attack. In addition, attr_accessible documents programmers' assumptions about security explicitly in their application code: as a whitelist, it is a known point of weakness in the model class, and it will be examined thoroughly by any security review process.

This is extraordinarily desirable, so it's a good idea for developers to make using attr_accessible compulsory. This is easy to do: simply call ActiveRecord::Base.attr_accessible(nil) in an initializer, and all Rails models will automatically have mass assignment disabled until it is explicitly enabled by attr_accessible. Note that this may break the functionality of common Rails gems and plugins, because they sometimes rely on the default. This is one way in which security is a problem of the community.
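In an app following the usual conventions, that initializer might look like this (the file path is illustrative):

```ruby
# config/initializers/disable_mass_assignment.rb
# Disable mass assignment for every model; each model must then
# opt back in explicitly with attr_accessible.
ActiveRecord::Base.attr_accessible(nil)
```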

An additional mitigation method, if your data store allows it, is to explicitly disallow writing to as much data as is feasible. There is almost certainly no legitimate reason for serialized_key to be reassignable. ActiveRecord lets you do this with attr_readonly. MongoMapper does not currently support this feature, which is one danger of using bleeding-edge technologies for production systems.

NoSQL Doesn't Mean No SQL Injection

The new NoSQL databases have a few decades less experience getting exploited than the old relational databases we know and love, which means that countermeasures against well-understood attacks are still immature. For example, the canonical attack against SQL databases is SQL injection: using the user-exposed interface of an application to craft arbitrary SQL code and execute it against the database.


The previous code snippet allows code injection into MongoDB, effectively allowing an attacker full read access to the database, including to serialized encryption keys. Observe that because of the magic of string interpolation, the attacker can cause the string including the JavaScript to evaluate to virtually anything the attacker desires. For example, the attacker could inject a carefully constructed JavaScript string to cause the first regular expression to terminate without any results, then execute arbitrary code, then comment out the rest of the JavaScript.

We can get one bit of data about any particular person out of this find call—whether the person is in the result set or not. Since we can construct the result set at will, however, we can make that a very significant bit. JavaScript can take a string and convert it to a number. The code for this is left as an exercise for the reader. With that JavaScript, the attacker can run repeated find queries against the database to do a binary search for the serialized encryption key pair:

"Return Patrick if his serialized key is more than 2^1023. OK, he isn't in the result set? All right, return Patrick if his key is more than 2^1022. He is in the result set? Return him if his key is more than 2^1022 + 2^1021. ..."

A key length of 1,024 bits might strike a developer as likely to be very secure. If we are allowed to do a binary search for the key, however, it will take only on the order of 1,000 requests to discover the key. A script executing searches through an HTTP client could trivially run through 1,000 accesses in a minute or two. Compromising the user's key pair in this manner compromises all messages the user has ever sent or will ever send on Diaspora, and it would leave no trace of intrusion aside from an easily overlooked momentary spike in activity on the server. A more patient attacker could avoid leaving even that.
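The arithmetic is easy to check with a self-contained simulation: given a yes/no membership oracle ("is the secret greater than this guess?"), an n-bit secret falls in exactly n queries. Here the lambda stands in for the injected find call; the names and the sample key are illustrative.

```ruby
# Simulate binary-search key extraction. The oracle stands in for an
# injected find() query answering: is the secret greater than guess?
def extract_secret(bits, oracle)
  low, high = 0, (1 << bits) - 1
  queries = 0
  while low < high
    mid = (low + high) / 2
    queries += 1
    if oracle.call(mid) # "the victim appears in the result set"
      low = mid + 1
    else
      high = mid
    end
  end
  [low, queries]
end

secret = 0x9e3779b97f4a7c15 # pretend this is a 64-bit key
oracle = lambda { |guess| secret > guess }

found, queries = extract_secret(64, oracle)
found == secret #=> true
queries         #=> 64
```

Scaling the same loop to a 1,024-bit key costs 1,024 queries, matching the article's "on the order of 1,000 requests."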

This is probably not the only vulnerability caused by code injection. It is very possible that an attacker could execute state-changing JavaScript through this interface, or join the Person document with other documents to read out anything desired from the database, such as user password hashes. Evaluating whether these attacks are feasible requires in-depth knowledge of the internal workings of MongoDB and the Ruby wrappers for it. Typical application developers are insufficiently skilled to evaluate parts of the stack operating at those levels: it is essentially the same as asking them whether their SQL queries would allow buffer overruns if executed against a database compiled against an exotic architecture. Rather than attempting to answer this question, sensible developers should treat any injection attack as allowing a total system compromise.

How to Avoid This

Do not interpolate strings in queries sent to your database. Use the MongoDB equivalent of prepared statements. If your database solution does not have prepared statements, then it is insufficiently mature to be used in public-facing products.
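The principle is the same one prepared statements enforce in SQL: keep untrusted data out of the code channel. A hedged sketch of the difference for a MongoDB-style query (the hashes are illustrative; this is the shape of the criteria, not an exact MongoMapper API call):

```ruby
# Attacker-controlled input, crafted to escape a JavaScript string.
user_input = "'; return true; var x='"

# Vulnerable pattern: the input is interpolated into a $where clause,
# so it becomes part of the query *code* that MongoDB will execute.
unsafe = { "$where" => "this.name == '#{user_input}'" }

# Safe pattern: the input is passed as *data* in a criteria hash --
# the MongoDB analog of a prepared statement's bound parameter.
safe = { "name" => user_input }

unsafe["$where"] # contains the attacker's JavaScript verbatim
safe["name"]     # just an odd-looking string; it can never execute
```

In the safe form, the worst an attacker can do is search for a name nobody has; in the unsafe form, the attacker writes part of your query.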

Be Careful When Releasing Software to End Users

One could reasonably ask whether security flaws in a developer preview are an emergency or merely a footnote in the development history of a product. Owing to the circumstances of its creation, Diaspora never had the luxury of being both publicly available but not yet exploitable. As a highly anticipated project, Diaspora was guaranteed to (and did) have publicly accessible servers available within literally hours of the code being available.

People who set up servers should know enough to evaluate the security consequences of running them. This was not the case with the Diaspora preview: there were publicly accessible Diaspora servers where any user could trivially compromise the account of another user. Moreover, even if one assumes that the server operators understand what they are doing, their users and their users' friends who are invited to join "The New Secure Facebook" are not capable of evaluating their security on Diaspora. They trust that, since it is on their browser and endorsed by a friend, it must be safe and secure. (This is essentially the same process through which they joined Facebook prior to evaluating the privacy consequences of that action.)

The most secure computer system is one that is in a locked room, surrounded by armed guards, and powered off. Unfortunately, that is not a feasible recommendation in the real world: software needs to be developed and used if it is to improve the lives of its users. Could Diaspora have simultaneously achieved a public-preview release without exposing end users to its security flaws? Yes. A sensible compromise would have been to release the code with the registration pages elided, forcing developers to add new users only via Rake tasks or the Rails console. That would preserve 100 percent of the ability of developers to work on the project and for news outlets to take screenshots—without allowing technically unsophisticated people to sign up on Diaspora servers.

The Diaspora community has taken some steps to reduce the harm of prematurely deploying the software, but they are insufficient. The team curates a list of public Diaspora seeds, including a bold disclaimer that the software is insecure, but that sort of passive posture does not address the realities of how social software spreads: friends recommend it to friends, and warnings will be unseen or ignored in the face of social pressure to join new sites.

Could Rails Have Prevented These Issues?

Many partisans for languages or frameworks argue that "their" framework is more secure than alternatives and that some other frameworks are by nature insecure. Insecure code can be written in any language: indeed, given that the question "Is this secure or not?" is algorithmically undecidable (it trivially reduces to the halting problem), one could probably go so far as to say it is flatly impossible to create any useful computer language that will always be secure.

That said, defaults and community matter. Rails embodies a spirit of convention over configuration, an example of what the team at 37signals (the original authors of Rails) describes as "opinionated software." Rails conventions are pervasively optimized for programmer productivity and happiness. This sometimes trades off with security, as in the example of mass assignment being on by default.

Compromises exist on some of these opinions that would make Rails more secure without significantly impeding the development experience. For example, Rails could default to mass assignment being available in development environments, but disabled in production environments (which are, typically, the ones that are accessible by malicious users). There is precedent for this: for example, Rails prints stack traces (which may include sensitive information) only for local requests when in production mode, and gives less informative (and more secure) messages if errors are caused by nonlocal requests.

No amount of improving frameworks, however, will save programmers from mistakes such as forgetting to check authorization prior to destructive actions. This is where the community comes in: the open source community, practicing developers, and educators need to emphasize security as a process. There is no technological silver bullet that makes an application secure: it is made more secure as a result of detailed analysis leading to actions taken to resolve vulnerabilities.

This is often neglected in computer science education, as security is seen as either an afterthought or an implementation detail to be addressed at a later date. Universities often grade like industry: a program that operates successfully on almost all of the inputs scores almost all of the possible points. This mindset, applied to security, has catastrophic results: the attacker has virtually infinite time to interact with the application, sometimes with its source code available, and see how it acts upon particular inputs. In a space of uncountable infinities of program states and possible inputs, the attacker may need to identify only one input for which the program fails to compromise the security of the system.

It would not matter if everything else in Diaspora were perfectly implemented; if the search functionality still allowed code injection, that alone would result in total failure of the project's core goals.

Is Diaspora Secure After The Patches?

Security is a result of a process designed to produce it. While the Diaspora project has continued iterating on the software, and is being made available to select end users as of the publication of this article, it is impossible to say that the architecture and code are definitely secure. This is hardly unique to Diaspora: almost all public-facing software has vulnerabilities, despite huge amounts of resources dedicated to securing popular commercial and open source products.

This is not a reason to despair, though: every error fixed or avoided through improved code, improved practices, and security reviews offers incremental safety to the users of software and increases its utility. We can do better, and we should start doing so.



Patrick McKenzie is the founder of Kalzumeus, a small software business in Ogaki, Japan. His main products—Bingo Card Creator and Appointment Reminder—are both written in Ruby. He graduated from Washington University with a BS in computer science and a BA in East Asian studies in 2004.

© 2011 ACM 1542-7730/11/0300 $10.00

Originally published in Queue vol. 9, no. 3





Comments (newest first)

Don Park | Wed, 27 Apr 2011 17:25:25 UTC

Weapons of Mass Distraction

I have exclusive access to early-release versions of this paper (not true), and there are many serious, yet easily avoided, typos. This paper never had the luxury of being both publicly available and impossible to read at the same time.

** end sarcasm **

A core tenet of open source software and agile development, IMHO, is to release early and often in small increments right from the start. A justification for releasing early, when others might deploy it publicly, is to look at what's at risk. The updates and photos put on this service have arguably no value. Reading user records for an already encrypted password and email address is probably the most sensitive. The encrypted password has no value, so that leaves email addresses. A low-enough risk, especially when compared to the benefits of getting the source code out to a community of software developers as fast and as early as possible to get development moving as quickly as possible.

Overall, this paper is a great writeup of common mistakes in writing rails code.

Ryan | Wed, 27 Apr 2011 12:52:11 UTC

It's pretty safe to say that the Diaspora developers are (or at least were) pretty much novices when it comes to Rails development - they've made plenty of stupid mistakes which are easily avoidable - in fact, Rails makes it easy to avoid them!

For example, attr_accessible / attr_protected have been around for years and protect against mass-assignment - even MongoMapper (not part of Rails itself) has implemented them: http://mongomapper.com/documentation/plugins/accessible.html

When it comes to finding objects in the database, they can scope them to a particular user account using something simple like current_user.albums.find(params[:id]). Why they are even using 'find_by_id' instead of just 'find' (which effectively does the same thing and is the most common use) surprises me further.

Of course, since this article was written they may have corrected their mistakes. I certainly hope so because it's embarrassing to the Rails community.

Sandro | Fri, 01 Apr 2011 00:21:50 UTC

Quote: "No amount of improving frameworks, however, will save programmers from mistakes such as forgetting to check authorization prior to destructive actions."

Not entirely true. A web framework based on capability URLs requires no authorization checks because the authorization is encapsulated in the unguessable URL itself. This is the power of combining designation with authorization. See the Waterken server for an example.

Of course, this approach has its own problems, mainly on the user end when interfacing with such URLs.
