โ† Back to Course

OCP Series · 2 of 5

The Adapter Pattern

Where the adapter pattern comes from, why backends change for reasons unrelated to the feature, and how Rails' ActiveStorage swaps Disk, S3, GCS, and Azure storage without the app knowing which one is in use.

Where this rule comes from

The registry pattern from the previous lesson dispatches between variants at runtime, based on data stored on a record. The adapter pattern is the same idea applied to a different kind of variation: backends chosen at boot time by configuration. The app does not pick the storage backend on a per-record basis. It picks one for the whole environment (Disk in development, S3 in production, a fake one in tests) and treats it identically everywhere.

The pattern's name is older than Rails by decades. The Gang of Four book in 1994 defined Adapter as "convert the interface of a class into another interface clients expect." The original framing was about reconciling a legacy class with a new client. In modern Rails, the framing has shifted to be about swapping concrete backends behind a stable interface, which is a slightly different use of the pattern with the same name.

Rails has been built around this idea since its earliest days. ActiveRecord has database adapters: PostgreSQL, MySQL, SQLite all implement the same query interface but talk to different databases. ActionMailer has delivery adapters: SMTP, sendmail, a test adapter that captures messages instead of sending them. ActiveJob has queue adapters: Sidekiq, Solid Queue, Resque, and a test adapter. The pattern is so deeply embedded in Rails that most developers use it daily without noticing.

The OCP move the adapter pattern enforces is this: your application code should not know which backend is in use. It calls the same methods regardless of whether the file is in S3 or on a local disk, whether the queue is Sidekiq or Solid Queue, whether the email is going to SMTP or to a test inbox. Swapping backends is a configuration change, not a code change. The application stays closed; the system stays open to new backends.

The anti-pattern

Picture a Rails app where file uploads were added before ActiveStorage existed. The team used the S3 SDK directly. The upload code lives in a model:

class Avatar < ApplicationRecord
  def upload(file)
    s3 = Aws::S3::Client.new(region: "us-east-1")
    s3.put_object(
      bucket: ENV["UPLOAD_BUCKET"],
      key:    "avatars/#{id}/#{file.original_filename}",
      body:   file.read
    )
    update!(url: "https://#{ENV["UPLOAD_BUCKET"]}.s3.amazonaws.com/avatars/#{id}/#{file.original_filename}")
  end

  def download
    s3 = Aws::S3::Client.new(region: "us-east-1")
    response = s3.get_object(bucket: ENV["UPLOAD_BUCKET"], key: key_from_url)
    response.body.read
  end

  private

  # Recover the object key from the stored URL: more vendor coupling.
  def key_from_url
    URI.parse(url).path.delete_prefix("/")
  end
end

This works. The problem appears six months later when development gets painful. Every developer needs S3 credentials to upload an avatar locally. The test suite uploads real files to a real bucket every CI run. The team wants to use local disk in development and a fake adapter in tests, but the S3 SDK calls are written directly into the model.

Worse, the same pattern is now in twelve files. Profile pictures use S3 calls in Avatar. Product images use S3 calls in Product. Invoice PDFs use S3 calls in Invoice. Each one re-implements the same upload logic, with slightly different bucket-name conventions and slightly different error handling. When the team decides to migrate from S3 to GCS for cost reasons, twelve files need to change at once.

How Rails solves it

ActiveStorage's design is the OCP move applied to file storage. The framework defines a single interface that every storage backend implements:

# rails/activestorage/lib/active_storage/service.rb (the abstract base)
# License: MIT

class ActiveStorage::Service
  def upload(key, io, checksum: nil, **)
    raise NotImplementedError
  end

  def download(key, &block)
    raise NotImplementedError
  end

  def delete(key)
    raise NotImplementedError
  end

  def exist?(key)
    raise NotImplementedError
  end

  def url(key, expires_in:, filename:, content_type:, disposition:)
    raise NotImplementedError
  end
end

The base class is the contract. Every adapter raises NotImplementedError for any method it does not override, so the contract is enforced at runtime. Rails ships four concrete adapters under the same directory:

# rails/activestorage/lib/active_storage/service/disk_service.rb
class ActiveStorage::Service::DiskService < ActiveStorage::Service
  def upload(key, io, checksum: nil, **)
    instrument :upload, key: key, checksum: checksum do
      File.open(make_path_for(key), "wb") { |f| f.write(io.read) }
    end
  end
  # ... downloads from the filesystem, builds local URLs
end

# rails/activestorage/lib/active_storage/service/s3_service.rb
class ActiveStorage::Service::S3Service < ActiveStorage::Service
  def upload(key, io, checksum: nil, content_type: nil, **)
    instrument :upload, key: key, checksum: checksum do
      bucket.object(key).put(body: io, ...)
    end
  end
  # ... downloads from S3, signs S3 URLs
end

# Same shape: ActiveStorage::Service::GCSService, AzureStorageService,
# and MirrorService (which writes to several services at once)

Each adapter implements the same public methods. The internals differ wildly: DiskService talks to the filesystem, S3Service talks to the AWS SDK, GCSService talks to the Google Cloud SDK. The application code does not know which one is loaded.
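The mechanics can be reduced to a dozen lines. Below is a framework-free sketch, not Rails source: the class names echo Rails' but the bodies are simplified to the point of caricature. Two adapters expose the same public methods, and one piece of client code cannot tell them apart.

```ruby
require "stringio"
require "tmpdir"

# Two stand-in adapters with the same public interface.
class DiskService
  def initialize(root:)
    @root = root
  end

  def upload(key, io)
    File.binwrite(File.join(@root, key), io.read)
  end

  def download(key)
    File.binread(File.join(@root, key))
  end
end

class MemoryService
  def initialize
    @store = {}
  end

  def upload(key, io)
    @store[key] = io.read
  end

  def download(key)
    @store.fetch(key)
  end
end

# Client code: works with either adapter, never asks which one it has.
def roundtrip(service)
  service.upload("greeting.txt", StringIO.new("hello"))
  service.download("greeting.txt")
end

roundtrip(MemoryService.new)                    # => "hello"
roundtrip(DiskService.new(root: Dir.mktmpdir))  # => "hello"
```

The point of the sketch is the `roundtrip` method: it is the whole of "application code" here, and swapping its backend requires changing nothing inside it.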

The adapter is picked by configuration:

# config/storage.yml
test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>

local:
  service: Disk
  root: <%= Rails.root.join("storage") %>

amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  bucket: my-app-uploads
  region: us-east-1

# config/environments/development.rb
config.active_storage.service = :local

# config/environments/production.rb
config.active_storage.service = :amazon

# config/environments/test.rb
config.active_storage.service = :test  # a Disk service rooted in tmp/storage

The application code is identical across all three environments. user.avatar.attach(io: file, filename: "avatar.png") calls upload on whatever service is configured. The model has no knowledge of S3, of the filesystem, or of test fakes. The storage decision is a configuration change, not a code change.
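Under the hood, the symbol in config maps to a class by naming convention; Rails does this in ActiveStorage::Service::Configurator. The sketch below reproduces just that resolution step with hand-rolled classes. Storage, CONFIGS, and build_service are illustrative names, not Rails API.

```ruby
# Hand-rolled stand-ins, namespaced the way Rails namespaces its services.
module Storage
  class DiskService
    attr_reader :root

    def initialize(root:)
      @root = root
    end
  end

  class MemoryService
  end
end

# What a parsed storage config boils down to: a name, a service class,
# and the remaining keys as constructor options.
CONFIGS = {
  local: { service: "Disk", root: "/tmp/storage" },
  test:  { service: "Memory" }
}

# Boot-time resolution: "Disk" -> Storage::DiskService, built with the
# leftover config keys as keyword arguments.
def build_service(name)
  config = CONFIGS.fetch(name).dup
  klass  = Storage.const_get("#{config.delete(:service)}Service")
  config.empty? ? klass.new : klass.new(**config)
end

build_service(:local)  # => a Storage::DiskService with root "/tmp/storage"
build_service(:test)   # => a Storage::MemoryService
```

Constant lookup by convention is what keeps the scheme open: a new adapter only has to exist under the expected name to become selectable from config.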

Why this design holds up

Four benefits, each one solving a specific failure mode of the SDK-in-the-model version.

Development is local, tests are isolated, production is real. A developer writes feature code on their laptop with the Disk adapter, runs the test suite against a throwaway Disk service rooted in tmp/storage, and ships to production where the S3 adapter takes over. No credentials needed for local dev, no real network calls during tests, no code change between environments.

Switching backends is a one-line change. When the team decides to migrate from S3 to GCS, the change is config.active_storage.service = :google. The twelve models that upload files do not change. The backend is replaceable because the application never depended on it.

New backends are additive. When a new storage service appears that Rails does not ship a built-in adapter for, a third-party gem can implement the contract and register it. The gem is a new file; nothing in Rails needs to change. The framework is open to extension, closed to modification.

The contract is the documentation. An engineer who wants to know "what does Rails expect from a storage adapter?" opens active_storage/service.rb and reads thirty lines. The abstract base class is the spec. Every adapter implementation is checked against it, so writing a new adapter is a matter of going method by method.
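That enforcement costs nothing to reproduce. In the sketch below (hand-rolled, not Rails code), a half-finished adapter works for the method it overrides and fails loudly, at the call site, for the one it forgot:

```ruby
# A two-method contract in the ActiveStorage::Service style.
class Service
  def upload(key, io)
    raise NotImplementedError
  end

  def download(key)
    raise NotImplementedError
  end
end

# A half-finished adapter: upload is overridden, download is not.
class HalfDoneService < Service
  def upload(key, io)
    @last_key = key
  end
end

service = HalfDoneService.new
service.upload("avatar.png", "bytes")  # works: the override runs

begin
  service.download("avatar.png")       # falls through to the base class
rescue NotImplementedError
  # the missing method announces itself the first time it is called
end
```

The failure mode is late (runtime, not load time), but it is unambiguous: the backtrace points straight at the contract method the adapter still owes.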

Registry vs Adapter

The previous lesson covered registries; this one covers adapters. The two patterns look similar, both substitute one class for another behind a stable interface, but they are used at different boundaries.

  • Registry: the variant is chosen at runtime, based on data stored on a record. payment.processor is "stripe" for this row, "paypal" for the next. The application has both processors loaded; it dispatches per request.
  • Adapter: the variant is chosen at boot, based on configuration. The application loads one storage adapter for its entire lifetime. Switching means restarting.

Use a registry when the variation lives in your data. Use an adapter when the variation lives in your environment. The contract-and-substitution machinery is the same; the dispatching boundary differs.
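Reduced to code, the contrast is only a few lines. The processor lambdas and the STORAGE constant below are illustrative stand-ins, not real APIs:

```ruby
require "stringio"

# Registry: every variant is loaded; data on the record picks one per call.
PROCESSORS = {
  "stripe" => ->(amount) { "charged #{amount} via Stripe" },
  "paypal" => ->(amount) { "charged #{amount} via PayPal" }
}

def charge(payment)
  PROCESSORS.fetch(payment[:processor]).call(payment[:amount])
end

charge(processor: "stripe", amount: 5)  # => "charged 5 via Stripe"
charge(processor: "paypal", amount: 9)  # => "charged 9 via PayPal"

# Adapter: one variant is constructed at boot and held for the process
# lifetime; changing it means changing config and restarting.
class DiskStorage
  def upload(key, io)
    io.read.bytesize  # simplified stand-in body
  end
end

STORAGE = DiskStorage.new  # in Rails, built from config at boot

STORAGE.upload("k", StringIO.new("data"))  # same call whatever STORAGE is
```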

The principle at play

The Open/Closed Principle is doing real work here. The reason the team would change a storage backend has nothing to do with the reason they would change a User model. Cost, compliance, vendor lock-in, latency, none of those are reasons to edit User. By separating the "what does the model want to do" from "which backend physically does it," each side can change without disturbing the other.

The deeper move is one the Unix tradition has been making since the 1970s: program to an interface, not to an implementation. Every Unix tool reads from "standard input" and writes to "standard output", abstract handles that the operating system maps to files, pipes, terminals, or sockets at the user's discretion. ActiveStorage is the Rails version of the same idea. The application reads and writes through a Service; the operating environment decides what that Service actually is.

The pattern's pragmatic value in Rails is that backends are exactly the thing that changes for reasons unrelated to the feature. The feature ("attach a file to a user") is stable. The backend ("store it on S3") changes because of pricing or regulation or vendor stability. Putting the backend in a swappable adapter means those two timelines no longer collide.

Practice exercise

  1. Grep your app for direct SDK calls in models: grep -rn "Aws::\|Google::Cloud\|Stripe::" app/models. Every match is a model coupled to a specific vendor.
  2. For each match, ask: what would happen if you switched vendors next quarter? Count the files that would need to change.
  3. Pick one example. Sketch the contract, what methods would your application call regardless of vendor? Three to six methods is usually enough. Draw the abstract base class, then sketch two concrete adapter classes that implement it.
  4. Bonus: look at how your test suite handles these vendor calls today. Tests that mock individual SDK calls are a smell that the vendor abstraction is missing. An adapter would let the test environment swap in a whole fake adapter that implements the contract, instead of stubbing method calls one by one.
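As a worked version of step 3, here is what extracting the Avatar code from the anti-pattern section might look like. All names here (FileStore, FakeFileStore) are illustrative, and the real S3 adapter is omitted so the sketch does not depend on the AWS SDK:

```ruby
require "stringio"

# Step 3a: the contract. Everything the app calls, nothing vendor-specific.
class FileStore
  def upload(key, io)
    raise NotImplementedError
  end

  def download(key)
    raise NotImplementedError
  end

  def url_for(key)
    raise NotImplementedError
  end
end

# Step 3b: a concrete adapter. An S3FileStore would wrap Aws::S3::Client
# behind the same three methods; this in-memory fake is what tests use.
class FakeFileStore < FileStore
  def initialize
    @files = {}
  end

  def upload(key, io)
    @files[key] = io.read
  end

  def download(key)
    @files.fetch(key)
  end

  def url_for(key)
    "memory://#{key}"
  end
end

# The model depends on the contract, never on a vendor.
class Avatar
  def initialize(id, store)
    @id, @store = id, store
  end

  def upload(file_io, filename)
    @store.upload("avatars/#{@id}/#{filename}", file_io)
  end
end

store = FakeFileStore.new
Avatar.new(7, store).upload(StringIO.new("png bytes"), "me.png")
store.download("avatars/7/me.png")  # => "png bytes"
```

Compare this Avatar with the one at the top of the lesson: same feature, but the vendor now lives behind a three-method contract, so switching it is the one-file change the exercise asks you to count.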