What Is Serverless Computing?
2026-03-19
If you've spent any time around cloud conversations in the last few years, you've heard the word "serverless" thrown around constantly. It sounds like magic — no servers at all! But that's not quite what it means, and misunderstanding the term leads to bad decisions. Let me clear it up.
There Are Still Servers
Let's get this out of the way first. Serverless computing does not mean there are no servers. There are absolutely servers running your code. The difference is that you don't manage them. You don't provision them, patch them, scale them, or worry about capacity planning. The cloud provider handles all of that.
What you do is write functions, deploy them, and let the platform figure out the rest. You focus on business logic. The provider focuses on infrastructure.
That's the deal, and it's a good one for a lot of use cases.
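To make "write functions, deploy them" concrete, here's a minimal sketch of what a Lambda-style handler looks like. The event shape and the greeting logic are made up for illustration; the two-argument `handler(event, context)` signature is the real Lambda convention for Python.

```python
import json

def handler(event, context):
    # All you write is business logic: read the input event,
    # do the work, return a response. No server setup anywhere.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally you can exercise it the same way the platform does — pass it an event dict and a (here unused) context: `handler({"name": "serverless"}, None)`.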
The Event-Driven Model
Serverless computing runs on an event-driven model. Your code doesn't sit there running on a server waiting for something to happen. Instead, it gets invoked when an event occurs.
Events can be anything — an HTTP request hits an API Gateway endpoint, a file lands in an S3 bucket, a message arrives in a queue, a database record changes. When the event fires, the platform spins up your function, executes it, and then shuts it down.
This is fundamentally different from traditional hosting where you pay for a server to be running 24/7, whether it's doing work or not. With serverless, if nothing is happening, you're not paying.
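As a sketch of what "invoked when an event occurs" looks like in practice, here's a handler for one of the event types above — an S3 object-created notification. The record structure (`Records[].s3.bucket.name`, `Records[].s3.object.key`) follows the shape S3 actually delivers; the function body is just an illustrative stand-in for real processing.

```python
def handle_s3_event(event, context):
    # An S3 notification carries a list of records; each one names
    # the bucket and object key that triggered this invocation.
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real work (resize an image, parse a CSV, ...) would go here.
        processed.append(f"s3://{bucket}/{key}")
    return processed
```

The function never polls and never waits — it exists only for the milliseconds it takes to handle the records it was given.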
The Benefits
Cost Efficiency
You pay for what you use. Not for idle time, not for reserved capacity you might need someday. If your function runs for 200 milliseconds and uses 128MB of memory, that's what you get billed for. The AWS Lambda pricing model breaks this down in detail, and the numbers are compelling — especially with the generous free tier.
For a deeper look at what Lambda actually costs in practice, check out my post on what AWS Lambda costs.
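The billing arithmetic is simple enough to sketch. The rates below are illustrative assumptions (roughly the published x86 on-demand figures at the time of writing) — check the current AWS Lambda pricing page for your region before relying on them.

```python
# Assumed rates for illustration -- verify against current AWS pricing.
PRICE_PER_GB_SECOND = 0.0000166667   # compute charge
PRICE_PER_REQUEST = 0.0000002        # i.e. $0.20 per million requests

def invocation_cost(duration_ms: float, memory_mb: int) -> float:
    """Estimated cost of a single Lambda invocation, in dollars."""
    # Billing is memory allocated (in GB) times duration (in seconds).
    gb_seconds = (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST
```

At these rates, the 200 ms / 128 MB invocation above works out to well under a millionth of a dollar — which is why the free tier covers a surprising amount of real usage.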
Automatic Scaling
Serverless scales to zero when idle and scales up to thousands of concurrent executions when demand spikes. You don't configure auto-scaling groups or set thresholds. The platform just handles it. This is a huge win for workloads with unpredictable traffic patterns.
Reduced Operational Burden
No OS patches, no security updates for the underlying system, no capacity monitoring. Your operational surface area shrinks dramatically. For small teams and startups, this means your developers spend time building features instead of babysitting infrastructure.
The Downsides
Serverless is not the answer to everything. If someone tells you it is, they're selling something.
Cold Starts
When a function hasn't been invoked recently, the platform needs to spin up a new execution environment. This is called a cold start, and it adds latency — sometimes a few hundred milliseconds, sometimes more depending on your runtime and package size. For APIs where response time matters, this can be a real problem.
There are mitigation strategies like provisioned concurrency, but those add cost and complexity — which somewhat defeats the purpose of going serverless.
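A cheaper mitigation than provisioned concurrency is structural: do expensive setup at module load, outside the handler, so it runs once per execution environment and warm invocations skip it. The sketch below simulates that pattern with a placeholder "client" — in real code this is where you'd build, say, a boto3 client or a database connection pool.

```python
# Module-level code runs once, during the cold start. Anything
# created here is reused by every warm invocation in this
# execution environment.
EXPENSIVE_CLIENT = {"connected": True}  # placeholder for e.g. an SDK client

def handler(event, context):
    # Reuse the already-initialized client instead of rebuilding
    # it on every call -- warm invocations pay no setup cost.
    return {"client_ready": EXPENSIVE_CLIENT["connected"]}
```

This doesn't eliminate cold starts, but it keeps their cost to one payment per environment rather than one per request.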
Vendor Lock-In
When you build on AWS Lambda, you're building on AWS Lambda. Your function code might be portable, but the event sources, permissions model, deployment configuration, and surrounding services are all AWS-specific. Moving to Azure Functions or Google Cloud Functions is not a simple lift-and-shift. It's a rebuild.
Debugging and Observability
Debugging a distributed set of serverless functions is harder than debugging a monolithic application running on a server you can SSH into. You're relying on CloudWatch logs, X-Ray traces, and third-party tools to understand what happened when something goes wrong. It's gotten better over the years, but it's still a different kind of challenge.
Execution Limits
Lambda functions have a 15-minute maximum execution time. If your workload needs to run longer than that, serverless isn't the right tool. Same goes for workloads that need persistent connections or heavy local storage.
AWS Lambda: The Prime Example
When people talk about serverless, they're usually talking about AWS Lambda. It was one of the first serverless compute services, launching in 2014, and it remains the most widely adopted.
Lambda supports multiple runtimes — Node.js, Python, Java, .NET, Go, Ruby, and custom runtimes via containers. It integrates tightly with the rest of the AWS ecosystem, which is both its greatest strength and the source of vendor lock-in.
I've written about the top 5 Lambda use cases if you want to see where it shines in practice. And if you're getting started with AWS in general, my AWS Cloud Essentials post covers the fundamentals.
Should You Go Serverless?
It depends on the workload. Serverless is excellent for event-driven tasks, API backends with variable traffic, scheduled jobs, and data processing pipelines. It's less suited for long-running processes, latency-sensitive applications that can't tolerate cold starts, or workloads that need persistent state.
Start with a single function solving a real problem. See how the model feels. Then expand from there. That's how most successful serverless architectures get built — one function at a time.