
My 3 biggest takeaways from Serverless Computing London
My 3 key takeaways from Serverless Computing London 2018, as seen through the eyes of a cloud engineer.
Serverless is one of our technological pillars, so Frederik and I attended Serverless Computing London 2018. As usual, London was great, and so was the conference. It was a great opportunity to check out the state of serverless and meet some of the leaders in this space. Below are some of my key takeaways, as seen through the eyes of a cloud engineer.
Security has changed
As we all know, security is something we need to keep in mind when evaluating new technologies. Just as the move from classic servers to containers did, the move to serverless shifts where we need to put our focus.
One might assume that without servers there is no more risk involved, but this is not the case.
Serverless removes a lot of the pain points that some of us struggle with, but it also makes other aspects more difficult to manage.
The obvious improvement is that you no longer need to manage the OS layer. You therefore don't need to worry about patching vulnerabilities like Heartbleed, Spectre or Meltdown yourself; patching them is now the provider's responsibility.
Another improvement is that you are less vulnerable to denial-of-service attacks. Functions are triggered by events, so there is no need to implement auto scaling or over-provision your platform to mitigate these attacks. There are, however, some concerns to be mindful of: AWS Lambda still has a default limit of 1,000 concurrent executions, the backend database can still become the bottleneck and, last but not least, you still pay for every request.
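To make that concrete, here is a minimal sketch (my own illustration, not something from the conference) of capping a single function's concurrency with boto3, so one noisy endpoint can neither exhaust the account-wide limit nor hammer the database behind it. The function name "orders-api" is hypothetical.

```python
# Minimal sketch: reserve a slice of the account's concurrency for one
# hypothetical function ("orders-api") so a traffic spike can neither
# starve other functions nor overwhelm the database behind it.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_concurrency(
    FunctionName="orders-api",         # hypothetical function name
    ReservedConcurrentExecutions=100,  # hard cap, well below the 1,000 account default
)
```

Reserved concurrency acts as both a guarantee and a ceiling, which also limits how much a sudden flood of requests can end up costing you.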
While there are no fundamentally new security problems (at least not at the time of writing), there are some new pitfalls to watch out for. A serverless function by itself is quite basic: it runs some code and then stops. If you want to build an application out of functions, you need other services to glue them together (for example a firehose, a queue or a third-party service). In a serverless architecture we expose much more data and send it between services and functions, so we need to be aware of that and make sure that data is secured.
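As an illustration of what that glue can look like in practice, here is a sketch of my own (with hypothetical field names, not a pattern from the talk): a small Lambda handler that consumes messages from an SQS queue and validates the payload before processing it, since data that travels between services should not be trusted blindly.

```python
import json

# Hypothetical whitelist of fields we expect in a message.
ALLOWED_FIELDS = {"order_id", "customer_id", "amount"}

def handler(event, context):
    """Triggered by SQS: each invocation receives a batch of records."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # Reject messages with unexpected fields instead of trusting the queue.
        if not set(payload) <= ALLOWED_FIELDS:
            raise ValueError(f"Unexpected fields in message {record['messageId']}")
        process_order(payload)

def process_order(order: dict) -> None:
    # Placeholder for the real business logic.
    print(f"Processing order {order.get('order_id')}")
```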
For more information, check out Guy Podjarny's talk, which focused on these aspects: https://serverlesscomputing.london/sessions/serverless-security-whats-left-protect/
Vendor lock-in is not a bigger issue than it already is
If you look at serverless, you will see that it involves a lot of services and vendor-specific third-party tools to bind your functions together. If you are afraid of being locked into a certain cloud provider, you might conclude that going serverless on a cloud provider is a no-go. But if you look a bit deeper into the principle of lock-in, every platform or tool that you use comes with some binding, from the programming language you choose to where you run your code. Every decision you make carries a cost to on- and off-board. With that in mind, we can take a deeper look at how cloud-independent serverless actually is.
In his talk, Avi Deitcher showed that you can deploy the same application on AWS, Heroku, Kubernetes and OpenFaaS without much pain. My conclusion on this topic is that it is healthy to be aware that every decision we take involves some form of lock-in.
You can check out his talk here: https://serverlesscomputing.london/sessions/multi-platform-odyssey-running-exact-app-four-different-platforms/
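A simple way to keep that on- and off-boarding cost low, and roughly what running the same app on several platforms boils down to, is keeping the business logic free of provider SDKs and wrapping it in a thin adapter per platform. Here is a minimal sketch of that idea (my own, not Avi Deitcher's actual demo code):

```python
import json

def greet(name: str) -> dict:
    """Provider-agnostic business logic: no cloud SDKs imported here."""
    return {"message": f"Hello, {name}!"}

def lambda_handler(event, context):
    """Thin AWS Lambda (API Gateway) adapter around the same logic."""
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 200, "body": json.dumps(greet(body.get("name", "world")))}

# For Heroku, Kubernetes or OpenFaaS you would wrap greet() in a small
# HTTP handler instead; only the adapter changes per platform.
```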
Serverless is still relatively new to the cloud community
During the conference I noticed that there is still a lot of uncharted and unclaimed territory. While AWS is going big on serverless, the other cloud providers are also investing heavily in serverless tooling for their customers. This will develop further in the near future, and it will be interesting to see what these key players offer to take the lead.
Tooling around serverless is still maturing and has a long way to go. There are currently not many logging, monitoring or scanning tools built specifically for serverless environments, but the situation is improving.

My conclusion
One of the biggest advantages of going serverless is cost. For startups that are not live yet, it is quite easy to stay within Amazon's free tier limits, which creates opportunities for these kinds of customers: they don't need to pay for development infrastructure that doesn't generate revenue, and they only start paying for their runtime environment once they receive traffic.
The serverless ecosystem is a big step forward, but it requires a different approach than you may be used to. The abstraction level is higher and it's easy to get lost in the jungle of tools and technologies.
People are adopting serverless more and more, but it is still new to most of us, so it will be interesting to see what the future brings…
Personally, I think serverless is the way forward. From day one your code scales automatically and can handle large production workloads without any special provisioning. This lowers the upfront investment that development companies need to make and enables them to grow.
I'm excited about what the future will bring and I think the best is yet to come.
Want to know what this means for you? We can help. Get in touch with us below!