Frequently Asked Questions

What is Serverless?

Serverless is a term used to describe services that abstract away the underlying machine (i.e. the server) and are pay-per-use (i.e. cost nothing when not in use). Serverless architectures combine different Serverless services to build highly scalable systems that scale to zero and require no interaction with the underlying server. While FaaS (Function as a Service) offerings like AWS Lambda are the poster child for Serverless, it's not just compute that has Serverless solutions. Databases, messaging systems and many other services are now available in Serverless form, enabling fully Serverless applications to be built.

What is DynamoDB?

DynamoDB is a fully Serverless NoSQL database from AWS (Amazon Web Services). It's often used in Serverless architectures for storing and retrieving data due to its Serverless scaling characteristics (scales to zero, elastic scaling, costs nothing when not used). While on the surface it can seem like a simple key-value store, there are several advanced patterns, such as Single Table Design, that enable efficient data reads and writes at virtually any scale. In addition, its API-based interface avoids the connection pool constraints often encountered when interfacing Serverless architectures with relational databases.
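To make Single Table Design concrete, here is a minimal sketch (the entity and attribute names are hypothetical) of how composite partition and sort keys let a single table hold several entity types side by side:

```python
def customer_key(customer_id: str) -> dict:
    # All of a customer's items share one partition key
    return {"PK": f"CUSTOMER#{customer_id}"}

def order_item(customer_id: str, order_id: str, total_pence: int) -> dict:
    # Orders live in the customer's partition, sorted by the SK prefix,
    # so one Query call can fetch a customer and all their orders together
    return {
        "PK": f"CUSTOMER#{customer_id}",
        "SK": f"ORDER#{order_id}",
        "total_pence": total_pence,
    }

item = order_item("42", "0001", 1999)
```

In practice these items would be written via DynamoDB's PutItem API through an AWS SDK; the key point is that the access patterns are encoded in the key design itself.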

What is Event Storming?

EventStorming is a workshop format that acts as an extension to Domain-Driven Design. In this workshop, attendees from across the business work together to model out an entire business function or process in terms of events. These workshops are typically run in person, using an array of differently coloured Post-it Notes to model out Events, Entities, Aggregates and more. This enables discussion, boundaries and an overall event schema (structure) to be designed between teams. The approach can be taken further with the EventBridge Storming model by aleios, in which services, EventBridge Schemas and interfaces are also designed.

What is the Minimum Viable Migration Approach?

The Minimum Viable Migration (MVM) framework is a progressive migration framework for moving from one domain/technology to another. It's typically used to migrate from a legacy system, or a lift-and-shifted architecture, to Serverless. With an MVM approach, vertical slices of an application are migrated from one domain to the other, running the two systems in parallel throughout the migration. A combination of Edge Routing and Micro-Frontends is often leveraged to achieve this strategy.

What are the benefits of Serverless?

At a high level, Serverless reduces Total Cost of Ownership (TCO) and time-to-value (from idea to customers). With Serverless you only pay for what you use, which means it can often be more cost-effective than traditional architectures. Serverless can also provide improved scalability and availability, as it automatically scales with demand. Additionally, as the cloud provider handles infrastructure management, developers can focus on business-logic code, which speeds up development and increases productivity. Lastly, the application can be deployed across multiple locations around the globe, reducing latency for users.

How does Serverless scale to large audiences?

Serverless is naturally highly horizontally scalable on hyper-scale cloud providers. As usage of a Serverless application increases, the cloud provider automatically allocates additional resources to handle the load, such as running additional Serverless functions, without requiring any manual intervention. Serverless architectures also benefit from the distributed nature of cloud infrastructure, being able to run in multiple regions. In this way they are more resilient to outages or performance issues in a single location. Additionally, most cloud providers offer caching or content delivery networks (CDNs) that help to improve performance and reduce latency.

Is Serverless secure?

Security is an important concern for any computing architecture, including Serverless, and is a shared responsibility with the underlying cloud provider. One benefit of Serverless is that many security considerations, such as managing network infrastructure, are abstracted away by the cloud provider. Engineering teams must ensure that their own code is secure and free from vulnerabilities, following security best practices throughout the codebase and leveraging additional security tooling. A good understanding of how cloud & Serverless architectures work is important to maintain a secure application.

How do serverless architectures handle long-running processes?

Serverless services are typically designed to handle short-lived processes rather than long-running ones. Part of designing a Serverless application may involve breaking up large processes into smaller ones that run independently of each other. This has additional benefits: it makes your application more resilient and modular, and can reduce latency via parallelisation. To orchestrate these collections of smaller pieces, you can use a service like AWS Step Functions that handles coordination between different services, or you can adopt an event-driven architecture to communicate between them. When a fully Serverless service like AWS Lambda is not suitable, there are other, less Serverless services more tailored to long-running workloads (e.g. AWS Fargate).
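As an illustrative sketch (the step names are hypothetical), a long-running job can be broken into small, independently retryable steps chained by a coordinator, which is essentially the role a managed orchestrator like AWS Step Functions plays for you:

```python
def fetch(state: dict) -> dict:
    # Each step is small, independent and can be retried on its own
    return {**state, "records": [1, 2, 3]}

def transform(state: dict) -> dict:
    return {**state, "records": [r * 10 for r in state["records"]]}

def store(state: dict) -> dict:
    return {**state, "stored": len(state["records"])}

def run_pipeline(state: dict, steps=(fetch, transform, store)) -> dict:
    # The coordinator passes state between steps; in a real deployment
    # Step Functions would add retries, timeouts and error branches
    for step in steps:
        state = step(state)
    return state
```

Independent steps like these can also run in parallel where they don't depend on each other, which is where the latency benefit comes from.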

Can serverless architectures be used for mission-critical applications?

Serverless is a great fit for mission-critical applications due to its built-in scalability and resilience. But, just like any application, care must be taken to ensure reliability, availability, security and performance. To mitigate these risks, it's important to design applications that are resilient to failure, for example by deploying across multiple regions and building resilient backup systems.

What are some best practices for designing serverless applications?

  1. Serverless architectures are inherently modular. Identify the discrete tasks that need to be performed in your application and design them as independent functions. As a general rule, the smaller the function the better: small functions are easier to test, deploy and manage, and can also improve performance and scalability.
  2. Serverless architectures are well-suited to event-driven applications. Events such as user requests, or writes to a database, can trigger particular parts of the application to run. By utilising this paradigm, you can create a more loosely coupled and flexible architecture.
  3. Serverless functions can have some latency when starting up if they haven't been used for a while. To minimize this, try to keep individual functions small with minimal dependencies.
  4. You can theoretically do anything using Serverless compute, but that would be incredibly inefficient. Using managed services such as databases, storage and content delivery networks provides bespoke functionality while reducing the amount of code you need to write and maintain.
  5. Design with observability in mind, using tools to track performance, errors and usage patterns. This will help immensely when diagnosing issues as they occur and making informed architectural decisions.
  6. Build for high distribution and eventual consistency, ensuring you leverage strong Domain-Driven Design techniques (e.g. Event Storming) to avoid creating a huge web of inter-connected complexity in your architecture.
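For point 3, one common technique (shown here as a hedged Python sketch, with a hypothetical config value) is to do expensive initialisation once at module scope so that warm invocations reuse it instead of paying the cost on every call:

```python
_config = None  # lives for the lifetime of the execution environment

def get_config() -> dict:
    # Built on the first (cold) invocation only; warm invocations of
    # the same environment reuse the cached value
    global _config
    if _config is None:
        _config = {"table_name": "orders"}  # e.g. parse env vars, create SDK clients
    return _config

def handler(event, context=None):
    cfg = get_config()
    return {"table": cfg["table_name"]}
```

Keeping the dependency list small shortens the cold start itself; caching like this makes every subsequent invocation cheap.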

What is Lambda?

AWS Lambda is a compute service provided by Amazon Web Services (AWS). It provides a FaaS (Function-as-a-Service) capability to run a piece of code that can do almost anything, supporting many of the most popular programming languages; functions can range from small individual pieces of code to calling external endpoints and utilising other cloud services. Developers can write and deploy code without needing to worry about the underlying server infrastructure: they deploy code, specify what should trigger it, and Lambda automatically takes care of running it and scaling with usage. Under the hood, Lambda leverages the open-source Firecracker microVM virtualisation technology to allocate compute resources at scale.
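A minimal Python Lambda handler looks like the sketch below. The event shape shown is a simplified stand-in, since the real payload depends on the configured trigger (API Gateway, S3, EventBridge and so on):

```python
import json

def handler(event, context=None):
    # Lambda invokes this function with the trigger's payload as `event`
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying this amounts to packaging the file and pointing the function's handler setting at it; running and scaling are handled entirely by the service.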

What are Event-Driven Architectures?

In Event-Driven architectures, functionality is driven by events: notifications between parts of the system that indicate something has happened or changed. In this paradigm, components are loosely coupled and independent of each other, publishing events to a shared event bus from which other components receive notifications when events are published. This allows different parts of the codebase to react to each other without the need for direct communication. Architectures can then be independently managed and scaled, while also empowering more agile development, as new code can be added or modified without affecting the existing architecture. This is extremely powerful when combined with the right Team Topology approach.
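The publish/subscribe pattern described above can be sketched with a tiny in-memory event bus, a stand-in here for a managed bus such as Amazon EventBridge; the event name and payload are illustrative:

```python
from collections import defaultdict

class EventBus:
    # Minimal stand-in for a managed event bus: publishers and
    # subscribers share only the bus and the event names
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name: str, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name: str, payload: dict):
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
shipped = []
bus.subscribe("order.placed", lambda e: shipped.append(e["order_id"]))
bus.publish("order.placed", {"order_id": "42"})
```

Note that the publisher never references the subscriber: new consumers can be added later without touching the code that emits the event, which is exactly the loose coupling described above.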

What are the cost implications of using Serverless?

One of the main advantages of Serverless is that you only pay for the compute time you actually use, rather than paying for dedicated server capacity that may or may not be used. This can lead to significant cost savings for applications with highly variable or unpredictable usage. Applications with high and consistent usage patterns may be more cost-effective to run on dedicated servers.

Beyond this, it's important to consider the Total Cost of Ownership (TCO). TCO includes not only the direct costs of compute (the actual running of the application) but also other costs such as storage, network bandwidth, development and deployment, monitoring and debugging, and ongoing maintenance and support.

Serverless architectures can reduce TCO as they remove the need to manage and maintain infrastructure, as well as reducing development and deployment costs. This extends further when considering how the application scales to demand, as Serverless reduces the need for manual capacity planning and generally eliminates concerns about overprovisioning or underprovisioning resources.
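To make the pay-per-use model concrete, the sketch below estimates a monthly bill from an invocation count, average duration and memory size, mirroring how FaaS billing is typically structured. The rates used are illustrative placeholders, not real prices:

```python
def estimate_monthly_cost(
    invocations: int,
    avg_duration_s: float,
    memory_gb: float,
    price_per_request: float = 0.0000002,    # illustrative rate, not a real price
    price_per_gb_second: float = 0.0000167,  # illustrative rate, not a real price
) -> float:
    # Pay-per-use: idle time costs nothing; the bill tracks actual usage
    request_cost = invocations * price_per_request
    compute_cost = invocations * avg_duration_s * memory_gb * price_per_gb_second
    return request_cost + compute_cost

# One million 100 ms invocations at 512 MB
cost = estimate_monthly_cost(1_000_000, 0.1, 0.5)
```

Because both terms scale with actual invocations, a quiet month costs proportionally less, which is where the savings over fixed server capacity come from for spiky workloads.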

How do I monitor and troubleshoot serverless applications?

Monitoring and troubleshooting serverless applications is often different from traditional applications as they consist of disparate parts working independently. For this reason, it is important to design with observability in mind as otherwise it can be difficult to debug issues that arise. The following should be considered when designing your serverless architecture:

  1. Cloud providers typically offer built-in logging tools that allow you to monitor the output of your application and troubleshoot issues (e.g. CloudWatch in AWS). You can use these logs to track errors, performance issues and other events that occur in your application, making them an invaluable tool for a development team.
  2. These same logging tools let you set up alerts. These could, for example, notify you if the number of errors generated by a particular function exceeds a certain threshold, or if the application receives a sudden spike in traffic.
  3. Tracing tools (e.g. AWS X-Ray) allow you to track the flow of requests through the different parts of your application, making it easier to identify how issues occurred and how to troubleshoot them.
  4. Serverless applications benefit immensely from integration testing, and ensuring that your user-flows have appropriate testing is vital to confirm that your application is working as intended, serverless or not. Once this is set up, writing tests for serverless applications becomes a relatively easy process and can be embedded into your deployment process.
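Point 1 is much more effective when log lines are structured. The sketch below emits one JSON record per line, which log tooling such as CloudWatch Logs Insights can then filter and aggregate by field (the field names are illustrative):

```python
import json

def log(level: str, message: str, **fields) -> str:
    # One JSON object per line: queryable by field rather than by regex
    record = {"level": level, "message": message, **fields}
    line = json.dumps(record)
    print(line)
    return line

log("ERROR", "payment failed", order_id="42", retryable=True)
```

Attaching identifiers like a request or correlation ID to every record is what lets you stitch together the path of a single request across the disparate parts of a serverless system.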

What benefits can integrating generative AI into my applications provide?

Integrating generative AI into your applications can bring a wide range of benefits, including improving user experience, automating content creation, enhancing personalisation and providing tailored customer support. By leveraging Serverless technology, you can achieve these benefits with a cost-effective and scalable underlying infrastructure.

How do you ensure the Serverless integration of LLMs is secure and compliant with industry standards?

The considerations for a serverless application are the same as for any other application. Firstly, select a reputable cloud provider with strong security and certifications, and implement robust encryption for data at rest and in transit. You should also enforce strict access control measures, maintain network security using virtual private clouds and firewalls, and continuously monitor and log system activities. Finally, adhere to industry-specific data privacy and compliance requirements such as GDPR if applicable, and develop an incident response plan to effectively handle potential security incidents.

What types of data can be used to augment the foundational Large Language Models (LLMs) in creating new experiences?

The customization of Large Language Models can be achieved using a variety of data sources, including structured data (e.g., spreadsheets, databases), unstructured data (e.g., emails, documents), and semi-structured data (e.g., JSON, XML). The key is to provide the LLM with domain-specific and contextually relevant information that will enable it to generate outputs tailored to your unique requirements.
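One common way to feed such data to an LLM is retrieval augmentation: select the most relevant snippets and prepend them to the prompt. The sketch below scores documents by naive keyword overlap purely for illustration (real systems typically retrieve via vector embeddings), and the documents themselves are made up:

```python
def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring, a stand-in for embedding search
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    # Domain-specific context is injected ahead of the user's question
    context = "\n".join(retrieve(question, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 5 working days.",
    "Our office is in London.",
    "Refund requests need an order number.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

The same prompt-assembly step works regardless of whether the source data was structured, semi-structured or unstructured, as long as it has been reduced to retrievable text snippets.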

Got another question?

We're here to help, ask away!
Fill in your details below, or simply give us a call on +442045716968
