What scenarios are appropriate for serverless?
What is serverless?
Serverless computing doesn’t mean there isn’t a server involved in running your application — the code still runs on a server somewhere. The distinction is that the application development team no longer needs to concern themselves with managing server infrastructure.
By removing infrastructure concerns, serverless platforms like Azure Functions help teams increase their productivity and allow organizations to optimize their resources and focus on delivering solutions.
Serverless computing uses event-triggered stateless containers to host your application or part of your application. Serverless platforms can scale out and in to meet demand as needed. Platforms like Azure Functions provide easy, direct access to other Azure services like queues, events, and storage.
What scenarios are appropriate for serverless?
Serverless solutions use individual, short-running functions that are called in response to some trigger, which makes them ideal for processing background tasks.
For example, an application might need to send an email as part of processing a request. Instead of sending the email while handling the web request, the application could place the details of the email onto a queue, and an Azure Function could pick up the message and send the email. Many different parts of the application, or even many applications, could leverage this same Azure Function. This approach improves performance and scalability for the applications and uses queue-based load leveling to avoid bottlenecks related to sending the emails.
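To make the queue-based approach concrete, here is a minimal sketch of such a function, assuming the Azure Functions Python v2 programming model. The queue name, the connection setting, and the shape of the message (a JSON payload with `to` and `subject` fields) are assumptions made for illustration, and the actual email-sending call is left as a placeholder.

```python
import json
import logging

import azure.functions as func

app = func.FunctionApp()

# Fires whenever a message lands on the "email-queue" queue. The queue name and
# the "AzureWebJobsStorage" connection setting are placeholders for this sketch.
@app.queue_trigger(arg_name="msg",
                   queue_name="email-queue",
                   connection="AzureWebJobsStorage")
def send_email(msg: func.QueueMessage) -> None:
    # The web app enqueues a small JSON payload instead of sending the email inline.
    email = json.loads(msg.get_body().decode("utf-8"))
    logging.info("Sending email to %s with subject %r", email["to"], email["subject"])
    # Hand the message off to your email provider's SDK or an SMTP server here;
    # that call is omitted because it depends on the provider you use.
```

Because the function is bound to the queue by a trigger, the web application only needs to enqueue a message; it never references the function directly.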
Although a Publisher/Subscriber pattern between applications and Azure Functions is the most common pattern, other patterns are possible.
Azure Functions can be triggered by other events, such as changes to Azure Blob Storage. An application that supports image uploads could have an Azure Function responsible for creating thumbnails, resizing uploaded images to consistent dimensions, or optimizing image size. All of this functionality could be triggered directly by uploads to Azure Blob Storage, keeping the complexity and the workload out of the application itself.
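As an illustration, here is a minimal sketch of a blob-triggered thumbnail function, again assuming the Azure Functions Python v2 programming model. The container names, the connection setting, and the use of the Pillow library are assumptions made for this example.

```python
import io

import azure.functions as func
from PIL import Image  # Pillow, assumed to be listed in requirements.txt

app = func.FunctionApp()

# Runs whenever a new blob appears in the "images" container and writes a
# thumbnail to the "thumbnails" container. Container names and the
# "AzureWebJobsStorage" connection setting are placeholders for this sketch.
@app.blob_trigger(arg_name="image",
                  path="images/{name}",
                  connection="AzureWebJobsStorage")
@app.blob_output(arg_name="thumbnail",
                 path="thumbnails/{name}",
                 connection="AzureWebJobsStorage")
def make_thumbnail(image: func.InputStream, thumbnail: func.Out[bytes]) -> None:
    # Resize the uploaded image to fit within 128x128 pixels and save it as PNG.
    img = Image.open(io.BytesIO(image.read()))
    img.thumbnail((128, 128))
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    thumbnail.set(buffer.getvalue())
```

The input and output bindings handle all of the storage access, so the function body contains only the image-processing logic.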
Serverless computing provides a great way to perform slower tasks outside of the user interaction loop, and these tasks can easily scale with demand without requiring the entire application to scale.
When should you avoid serverless?
Serverless computing is best used for tasks that don’t block the user interface, which means it’s typically not ideal for hosting web applications or web APIs directly. The main reason is that serverless solutions are provisioned and scaled on demand.
When a new instance of a function is needed, it takes time to provision; this delay is referred to as a cold start. The delay is typically a few seconds, but can be longer depending on a variety of factors. A single instance can often be kept alive indefinitely (for instance, by periodically making a request to it), but the cold start issue remains whenever the number of instances needs to scale up.
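As one illustration of that keep-alive idea, the following sketch (again using the Azure Functions Python v2 programming model) uses a timer trigger to ping the app every few minutes. The schedule and the health-check URL are hypothetical; point the request at an inexpensive endpoint of your own app.

```python
import logging
import urllib.request

import azure.functions as func

app = func.FunctionApp()

# Runs every five minutes and makes a lightweight request to the function app,
# which keeps at least one instance warm. The CRON schedule and the URL below
# are placeholders for this sketch.
@app.timer_trigger(arg_name="timer", schedule="0 */5 * * * *")
def keep_warm(timer: func.TimerRequest) -> None:
    with urllib.request.urlopen(
            "https://<your-function-app>.azurewebsites.net/api/health") as response:
        logging.info("Keep-warm ping returned HTTP %s", response.status)
```

Note that this only masks cold starts for the instance being pinged; scaling out to additional instances still incurs the provisioning delay.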
If you need to avoid cold starts entirely, you can switch from a Consumption plan to a Dedicated plan. You can also configure one or more pre-warmed instances with the Premium plan, so when you need to add another instance, it’s already up and ready to go. These options can mitigate one of the key concerns associated with serverless computing.
You should also typically avoid serverless for long-running tasks. Serverless functions are best suited for small pieces of work that can be completed quickly; most serverless platforms require individual functions to complete within a few minutes.
Finally, leveraging serverless for certain tasks within your application adds complexity. It’s often best to architect your application in a modular, loosely coupled manner first, and then identify whether serverless offers benefits that make the additional complexity worthwhile.