Watch Out! How lambda coldstart is affected by runtime and memory limit

Lambda coldstart time is the extra invocation latency incurred whenever a new container has to be launched, either because of inactivity or because of increased concurrency. The coldstart impact is not constant: it varies widely with the runtime, memory limit, code size, and other Lambda configuration. In this post we will look at how different runtimes and memory limits affect Lambda coldstart.

Setup and Configuration

Runtime:

  • C# (.NET Core 1.0) – referred to as csharp1
  • C# (.NET Core 2.0) – referred to as csharp2
  • Java 8 – referred to as java
  • Go 1.x – referred to as golang
  • Python 2 – referred to as python2
  • Python 3 – referred to as python3
  • Node.js 4.3 – referred to as nodejs4
  • Node.js 6.10 – referred to as nodejs6

Memory Limits:

  • 128 MB
  • 256 MB
  • 512 MB
  • 1024 MB
  • 1536 MB
  • 3000 MB

A total of 48 “hello world” lambda functions (8 runtimes × 6 memory limits) were created using the serverless framework. Each function was executed 200 times to measure the coldstart time, and to guarantee a cold start the functions were re-deployed before every invocation.
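For reference, each of these functions is a trivial “hello world” handler. A minimal sketch of the Python variant is shown below; the actual handlers in the repository may differ slightly per runtime.

```python
# Minimal "hello world" handler used for the benchmark (Python variant).
# Returning immediately keeps the measured time dominated by the cold start itself.
def handler(event, context):
    return {"message": "hello world"}
```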

You can download the setup code from the following GitHub repository and run the study yourself. Keep in mind that the study might cost you a few dollars, so run it carefully.

https://github.com/dkkumargoyal/lambda-coldstart-runtime-vs-memory
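The measurement loop itself can be sketched roughly as follows. This is only an illustration of the methodology described above, assuming boto3 and the serverless CLI are available; the function names, paths, and log parsing are placeholders rather than the repository's exact code.

```python
import base64
import re
import subprocess

import boto3

lambda_client = boto3.client("lambda")  # default region and credentials

def measure_coldstart(function_name, service_dir):
    # Re-deploying the stack before the invocation forces a fresh container,
    # so the measured duration includes the cold start.
    subprocess.run(["serverless", "deploy"], cwd=service_dir, check=True)

    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",
        LogType="Tail",  # include the tail of the execution log in the response
    )
    log_tail = base64.b64decode(response["LogResult"]).decode("utf-8")
    # The REPORT line of the log contains e.g. "Duration: 345.67 ms".
    match = re.search(r"Duration:\s+([\d.]+)\s+ms", log_tail)
    return float(match.group(1)) if match else None

# Each of the 48 functions is measured 200 times, as described above.
samples = [measure_coldstart("coldstart-java-1024", "./java-1024") for _ in range(200)]
```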

Analysis: runtime and memory impact on lambda coldstart

Data: Java Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – java]

[Figure: java lambda invocation aggregate stats]

Java invocations time-based observations:

  • Range: Very high.
  • Consistency: Very consistent pattern within each memory limit, with very low standard deviation.
  • Improvement with memory limit: Improvement is proportional to the memory limit increase up to 1024 MB; afterwards the improvement is only marginal.
  • Overlapping: Only one instance of overlapping observed, where invocation times in the 3000 MB range jump into the 1536 MB range.
  • Other: At 3000 MB the invocation times seem to stabilize, with a standard deviation of only 24 ms.

Data: C# (.NET Core 1.0) Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – csharp1]

[Figure: csharp1 lambda invocation aggregate stats]

csharp1 invocations time-based observations:

  • Range: Very high.
  • Consistency: Fairly consistent pattern within each memory limit, with a few instances of unexpected jumps in invocation time.
  • Improvement with memory limit: Improvement is proportional to the memory limit increase up to 1024 MB; afterwards the improvement is not consistent, and is even negative in some instances.
  • Overlapping: Multiple instances of overlapping, with more overlapping instances as the memory limit increases.
  • Other: At 128 MB and 1536 MB there are a few unexpectedly high invocations.

Data: C# (.NET Core 2.0) Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – csharp2]

[Figure: csharp2 lambda invocation aggregate stats]

csharp2 invocations time-based observations:

  • Range: Very high.
  • Consistency: Inconsistent pattern within each memory limit, especially at 1536 MB and 3000 MB.
  • Improvement with memory limit: The overall trend is decreasing time with more memory, but the improvement is only proportional from 128 MB to 256 MB.
  • Overlapping: A fairly large number of overlapping instances, with more overlapping as the memory limit increases.
  • Other: The 1536 MB and 3000 MB memory limits do not seem to justify the increase in memory.

 

Data: Nodejs4 Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – nodejs4]

[Figure: nodejs4 lambda invocation aggregate stats]

nodejs4 invocations time-based observations:

  • Range: Low.
  • Consistency: Inconsistent pattern within each memory limit, except the 3000 MB range.
  • Improvement with memory limit: Very marginal improvement with the increase in memory.
  • Overlapping: Very high; even the 128 MB and 3000 MB ranges collide.
  • Other: At 128 MB there are a few instances of very high invocation times compared to the rest.

Data: Nodejs6 Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – nodejs6]

[Figure: nodejs6 lambda invocation aggregate stats]

nodejs6 invocations time-based observations:

  • Range: Low.
  • Consistency: Inconsistent pattern within each memory limit, except the 3000 MB range.
  • Improvement with memory limit: Very marginal improvement with the increase in memory; however, the pattern is clearer than for nodejs4.
  • Overlapping: Very high; even the 128 MB and 3000 MB ranges collide.
  • Other: At 128 MB and 256 MB there are a few instances of very high invocation times compared to the rest.

Data: python2 Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – python2]

[Figure: python2 lambda invocation aggregate stats]

python2 invocations time-based observations:

  • Range: Very low.
  • Consistency: Inconsistent pattern within each memory limit, except the 3000 MB range.
  • Improvement with memory limit: Negligible improvement with the increase in memory; only the consistency of variation improves.
  • Overlapping: Very high.
  • Other: The 1024 MB and 1536 MB invocation patterns are very similar.

Data: python3 Invocation comparison for different memory limits

 

[Figure: lambda coldstart all invocations – python3]

[Figure: python3 lambda invocation aggregate stats]

python3 invocations time-based observations:

  • Range: Very low.
  • Consistency: Similar range across all memory limits, except 3000 MB, which shows very close grouping.
  • Improvement with memory limit: Negligible improvement with the increase in memory; only the consistency of variation improves.
  • Overlapping: Very high.
  • Other: On average slightly better than python2.

Data: golang Invocation comparison for different memory limits

[Figure: lambda coldstart all invocations – golang]

[Figure: golang lambda invocation aggregate stats]

golang invocations time-based observations:

  • Range: Extremely low.
  • Consistency: Inconsistent at 128 MB, 256 MB, and 1024 MB; for the other memory limits the invocation times are consistent with very low standard deviation.
  • Improvement with memory limit: Similar range for 128 MB and 256 MB, a good dip at 512 MB, and only a very small improvement afterwards.
  • Overlapping: Very high.
  • Other: Two outliers were observed at 256 MB and 1024 MB, which had to be adjusted while plotting the aggregate graph.

Data: comparing consolidated stats [min, max, average, median, standard deviation, 99th and 95th percentile graphs] for different runtimes vs memory

Due to the large difference in scale, the graphs are divided into two categories:

  1. csharp1, csharp2, and java, which have large invocation times.
  2. golang, python2, python3, nodejs4, and nodejs6, which have small invocation times.
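As a rough illustration (not the repository's exact code), the consolidated stats plotted below can be derived from each function's 200 recorded samples with a few lines of Python:

```python
import statistics

def aggregate(samples):
    # Consolidate one function's cold-start samples (in ms) into summary stats.
    ordered = sorted(samples)

    def percentile(p):
        # Nearest-rank style percentile over the sorted samples.
        index = min(len(ordered) - 1, round(p * (len(ordered) - 1)))
        return ordered[index]

    return {
        "min": ordered[0],
        "max": ordered[-1],
        "avg": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "std_dev": statistics.stdev(ordered),
        "p95": percentile(0.95),
        "p99": percentile(0.99),
    }
```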
[Figures: lambda coldstart max invocation time across runtime and memory]

Observation:

  • The max invocation time of csharp2 is higher than that of the other runtimes in most cases.
[Figures: lambda coldstart min invocation time across runtime and memory]

Observation:

  • The minimum coldstart invocation times recorded for golang, nodejs4, nodejs6, python2, and python3 are extremely low.
[Figures: lambda coldstart median invocation time across runtime and memory]

Observation:

  • Java has the highest average invocation time of all runtimes across all memory sizes.
  • golang outperforms all other runtimes for memory limits above 256 MB.
  • python3 has a consistent average invocation time across memory limits.
[Figures: lambda coldstart deviation in invocation time across runtime and memory]

Observation:

  • python3 has an almost constant standard deviation across memory limits.

Conclusion

  • golang outperforms all other runtimes across memory limits in most cases
  • Java, csharp1, and csharp2 have very high coldstart invocation times
  • csharp2 outperforms csharp1 across memory limits
  • nodejs6 outperforms nodejs4 for larger (> 256 MB) memory sizes
  • python3 is mostly on par with python2
  • at the 3000 MB memory limit, all runtimes show very consistent coldstart invocation times, with similar ranges and low deviations.

NOTE & REFERENCE

This post is inspired by a similar study done by Yan Cui in the article “aws lambda – compare coldstart time with different languages, memory and code sizes”. The test methodology used here is also borrowed from that post, with some extensions. I hope this new analysis gives some interesting insights. Please mention in the comments section if you see anything missing or want to add something to the analysis.

If you liked this analysis, you might also like my other analysis post on lambda vs EC2 cost for various use cases.
