52-weeks-aws-certified-developer-lambda-serverless

Author: Noah Gift September 29, 2022 Duration: 24:51

[00:00.000 --> 00:04.560] All right, so I'm here with 52 weeks of AWS
[00:04.560 --> 00:07.920] and still continuing to do developer certification.
[00:07.920 --> 00:11.280] I'm gonna go ahead and share my screen here.
[00:13.720 --> 00:18.720] All right, so we are on Lambda, one of my favorite topics.
[00:19.200 --> 00:20.800] Let's get right into it
[00:20.800 --> 00:24.040] and talk about how to develop event-driven solutions
[00:24.040 --> 00:25.560] with AWS Lambda.
[00:26.640 --> 00:29.440] With Serverless Computing, one of the things
[00:29.440 --> 00:32.920] that it is going to do is it's gonna change
[00:32.920 --> 00:36.000] the way you think about building software.
[00:36.000 --> 00:39.000] In a traditional deployment environment,
[00:39.000 --> 00:42.040] you would configure an instance, you would update an OS,
[00:42.040 --> 00:45.520] you'd install applications, build and deploy them,
[00:45.520 --> 00:47.000] load balance.
[00:47.000 --> 00:51.400] So this is non-cloud-native computing. With Serverless,
[00:51.400 --> 00:54.040] you really only need to focus on building
[00:54.040 --> 00:56.360] and deploying applications and then monitoring
[00:56.360 --> 00:58.240] and maintaining the applications.
[00:58.240 --> 01:00.680] And so really what Serverless does
[01:00.680 --> 01:05.680] is it allows you to focus on the code for the application
[01:06.320 --> 01:08.000] and you don't have to manage the operating system,
[01:08.000 --> 01:12.160] the servers, or scaling, and it really is a huge advantage
[01:12.160 --> 01:14.920] because you don't have to pay for the infrastructure
[01:14.920 --> 01:15.920] when the code isn't running.
[01:15.920 --> 01:18.040] And that's really a key takeaway.
[01:19.080 --> 01:22.760] If you take a look at the AWS Serverless platform,
[01:22.760 --> 01:24.840] there's a bunch of fully managed services
[01:24.840 --> 01:26.800] that are tightly integrated with Lambda.
[01:26.800 --> 01:28.880] And so this is another huge advantage of Lambda:
[01:28.880 --> 01:31.000] it isn't necessarily that it's the fastest
[01:31.000 --> 01:33.640] or it has the most powerful execution,
[01:33.640 --> 01:35.680] it's the tight integration with the rest
[01:35.680 --> 01:39.320] of the AWS platform and developer tools
[01:39.320 --> 01:43.400] like the AWS Serverless Application Model, or AWS SAM,
[01:43.400 --> 01:45.440] which helps you simplify the deployment
[01:45.440 --> 01:47.520] of Serverless applications.
[01:47.520 --> 01:51.960] And some of the services include Amazon S3,
[01:51.960 --> 01:56.960] Amazon SNS, Amazon SQS and AWS SDKs.
[01:58.600 --> 02:03.280] So in terms of Lambda, AWS Lambda is a compute service
[02:03.280 --> 02:05.680] for Serverless and it lets you run code
[02:05.680 --> 02:08.360] without provisioning or managing servers.
[02:08.360 --> 02:11.640] It allows you to trigger your code in response to events
[02:11.640 --> 02:14.840] that you would configure like, for example,
[02:14.840 --> 02:19.200] dropping something into an S3 bucket, like an image,
[02:19.200 --> 02:22.200] and having a Lambda transcode it to a different format.
[02:23.080 --> 02:27.200] It also allows you to scale automatically based on demand
[02:27.200 --> 02:29.880] and it will also incorporate built-in monitoring
[02:29.880 --> 02:32.880] and logging with Amazon CloudWatch.
[02:34.640 --> 02:37.200] So if you look at AWS Lambda,
[02:37.200 --> 02:39.040] some of the things that it does
[02:39.040 --> 02:42.600] is it enables you to bring in your own code.
[02:42.600 --> 02:45.280] So the code you write for Lambda isn't written
[02:45.280 --> 02:49.560] in a new language, you can write things
[02:49.560 --> 02:52.600] in tons of different languages for AWS Lambda,
[02:52.600 --> 02:57.600] Node.js, Java, Python, C#, Go, Ruby.
[02:57.880 --> 02:59.440] There are also custom runtimes.
[02:59.440 --> 03:03.880] So you could do Rust or Swift or something like that.
[03:03.880 --> 03:06.080] And it also integrates very deeply
[03:06.080 --> 03:11.200] with other AWS services and you can invoke
[03:11.200 --> 03:13.360] third-party applications as well.
[03:13.360 --> 03:18.080] It also has a very flexible resource and concurrency model.
[03:18.080 --> 03:20.600] And so Lambda would scale in response to events.
[03:20.600 --> 03:22.880] So you would just need to configure memory settings
[03:22.880 --> 03:24.960] and AWS would handle the other details
[03:24.960 --> 03:28.720] like the CPU, the network, the IO throughput.
[03:28.720 --> 03:31.400] Also, you can use the
[03:31.400 --> 03:35.000] AWS Identity and Access Management service, or IAM,
[03:35.000 --> 03:38.560] to grant Lambda access to whatever other resources you need.
[03:38.560 --> 03:41.200] And this is one of the ways that you control
[03:41.200 --> 03:44.720] the security of Lambda: you have real guardrails
[03:44.720 --> 03:47.000] around it, because you just give Lambda
[03:47.000 --> 03:50.080] a role scoped to whatever you need Lambda to do,
[03:50.080 --> 03:52.200] talk to SQS or talk to S3,
[03:52.200 --> 03:55.240] and it will only be able to do what that role allows.
[03:55.240 --> 04:00.240] And the other thing about Lambda is that it has built-in
[04:00.560 --> 04:02.360] availability and fault tolerance.
[04:02.360 --> 04:04.440] So again, it's a fully managed service,
[04:04.440 --> 04:07.520] it has high availability, and you don't have to do anything
[04:07.520 --> 04:08.920] at all to use that.
[04:08.920 --> 04:11.600] And one of the biggest things about Lambda
[04:11.600 --> 04:15.000] is that you only pay for what you use.
[04:15.000 --> 04:18.120] And so when your Lambda function is idle,
[04:18.120 --> 04:19.480] you don't have to pay for it,
[04:19.480 --> 04:21.440] versus something else,
[04:21.440 --> 04:25.240] like even in the case of a Kubernetes-based system,
[04:25.240 --> 04:28.920] still there's a host machine that's running Kubernetes
[04:28.920 --> 04:31.640] and you have to actually pay for that.
[04:31.640 --> 04:34.520] So one of the ways that you can think about Lambda
[04:34.520 --> 04:38.040] is that there's a bunch of different use cases for it.
[04:38.040 --> 04:40.560] So let's start off with different use cases,
[04:40.560 --> 04:42.920] web apps, I think would be one of the better ones
[04:42.920 --> 04:43.880] to think about.
[04:43.880 --> 04:46.680] So you can combine AWS Lambda with other services
[04:46.680 --> 04:49.000] and you can build powerful web apps
[04:49.000 --> 04:51.520] that automatically scale up and down.
[04:51.520 --> 04:54.000] And there's no administrative effort at all.
[04:54.000 --> 04:55.160] There's no backups necessary,
[04:55.160 --> 04:58.320] no multi-data center redundancy, it's done for you.
[04:58.320 --> 05:01.400] Backends, so you can build serverless backends
[05:01.400 --> 05:05.680] that let you handle web, mobile, IoT,
[05:05.680 --> 05:07.760] and third-party applications.
[05:07.760 --> 05:10.600] You can also build those backends with Lambda,
[05:10.600 --> 05:15.400] with API Gateway, and you can build applications with them.
[05:15.400 --> 05:17.200] In terms of data processing,
[05:17.200 --> 05:19.840] you can also use Lambda to run code
[05:19.840 --> 05:22.560] in response to a trigger, change in data,
[05:22.560 --> 05:24.440] shift in system state,
[05:24.440 --> 05:27.360] and really all of AWS for the most part
[05:27.360 --> 05:29.280] is able to be orchestrated with Lambda.
[05:29.280 --> 05:31.800] So it's really like a glue type service
[05:31.800 --> 05:32.840] that you're able to use.
[05:32.840 --> 05:36.600] Now chatbots, that's another great use case for it.
[05:36.600 --> 05:40.760] Amazon Lex is a service for building conversational chatbots
[05:42.120 --> 05:43.560] and you could use it with Lambda.
[05:43.560 --> 05:48.560] The AWS Lambda service is also able to be used
[05:50.080 --> 05:52.840] with voice and IT automation.
[05:52.840 --> 05:55.760] These are all great use cases for Lambda.
[05:55.760 --> 05:57.680] In fact, I would say it's kind of like
[05:57.680 --> 06:01.160] the go-to automation tool for AWS.
[06:01.160 --> 06:04.160] So let's talk about how Lambda works next.
[06:04.160 --> 06:06.080] So the way Lambda works is that
[06:06.080 --> 06:09.080] there's a function and there's an event source,
[06:09.080 --> 06:10.920] and these are the core components.
[06:10.920 --> 06:14.200] The event source is the entity that publishes events
[06:14.200 --> 06:19.000] to AWS Lambda, and the Lambda function is the code
[06:19.000 --> 06:21.960] that you're gonna use to process the event.
[06:21.960 --> 06:25.400] And AWS Lambda would run that Lambda function
[06:25.400 --> 06:29.600] on your behalf, and a few things to consider
[06:29.600 --> 06:33.840] is that it really is just a little bit of code,
[06:33.840 --> 06:35.160] and you can configure the triggers
[06:35.160 --> 06:39.720] to invoke a function in response to resource lifecycle events,
[06:39.720 --> 06:43.680] like, for example, responding to incoming HTTP requests,
[06:43.680 --> 06:47.080] consuming events from a queue, like in the case of SQS,
[06:47.080 --> 06:48.320] or running it on a schedule.
[06:48.320 --> 06:49.760] So running it on a schedule is actually
[06:49.760 --> 06:51.480] a really good data engineering task, right?
[06:51.480 --> 06:54.160] Like you could run it periodically to scrape a website.
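That scheduled pattern is usually wired up through EventBridge (CloudWatch Events). As a rough sketch, assuming a hypothetical rule name, target id, and function ARN, the pieces look like this:

```python
def scheduled_rule(function_arn):
    """Build PutRule/PutTargets-style parameters for a daily Lambda trigger."""
    return {
        "Name": "nightly-scrape",             # hypothetical rule name
        "ScheduleExpression": "rate(1 day)",  # EventBridge rate syntax
        "State": "ENABLED",
        "Targets": [{"Id": "scraper", "Arn": function_arn}],
    }

# You would pass these pieces to EventBridge's PutRule and PutTargets calls.
rule = scheduled_rule("arn:aws:lambda:us-east-1:123456789012:function:scraper")
```

You could also use a cron-style `ScheduleExpression` if the scrape has to land at a specific time of day.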
[06:55.120 --> 06:58.080] So as a developer, when you create Lambda functions
[06:58.080 --> 07:01.400] that are managed by the AWS Lambda service,
[07:01.400 --> 07:03.680] you can define the permissions for the function
[07:03.680 --> 07:06.560] and basically specify what are the events
[07:06.560 --> 07:08.520] that would actually trigger it.
[07:08.520 --> 07:11.000] You can also create a deployment package
[07:11.000 --> 07:12.920] that includes your application code
[07:12.920 --> 07:17.000] and any dependencies or libraries necessary to run the code,
[07:17.000 --> 07:19.200] and you can also configure things like the memory,
[07:19.200 --> 07:23.200] you can configure the timeout, and also configure the concurrency,
[07:23.200 --> 07:25.160] and then when your function is invoked,
[07:25.160 --> 07:27.640] Lambda will provide a runtime environment
[07:27.640 --> 07:30.080] based on the runtime and configuration options
[07:30.080 --> 07:31.080] that you selected.
[07:31.080 --> 07:36.080] So let's talk about models for invoking Lambda functions.
[07:36.360 --> 07:41.360] An event source invokes a Lambda function
[07:41.440 --> 07:43.640] by either a push or a pull model.
[07:43.640 --> 07:45.920] In the case of a push, the event source
[07:45.920 --> 07:48.440] directly invokes the Lambda function
[07:48.440 --> 07:49.840] when the event occurs.
[07:50.720 --> 07:53.040] In the case of a pull model,
[07:53.040 --> 07:56.960] the information is put into a stream or a queue,
[07:56.960 --> 07:59.400] and then Lambda would poll that stream or queue
[07:59.400 --> 08:02.800] and invoke the function when it detects an event.
[08:04.080 --> 08:06.480] So a few different examples would be
[08:06.480 --> 08:11.280] that some services can actually invoke the function directly.
[08:11.280 --> 08:13.680] So for a synchronous invocation,
[08:13.680 --> 08:15.480] the other service would wait for the response
[08:15.480 --> 08:16.320] from the function.
[08:16.320 --> 08:20.680] So a good example would be in the case of Amazon API Gateway,
[08:20.680 --> 08:24.800] which would be the REST-based service in front.
[08:24.800 --> 08:28.320] In this case, when a client makes a request to your API,
[08:28.320 --> 08:31.200] that client would get a response immediately.
[08:31.200 --> 08:32.320] And then with this model,
[08:32.320 --> 08:34.880] there's no built-in retry in Lambda.
[08:34.880 --> 08:38.040] Examples of this would be Elastic Load Balancing,
[08:38.040 --> 08:42.800] Amazon Cognito, Amazon Lex, Amazon Alexa,
[08:42.800 --> 08:46.360] Amazon API Gateway, AWS CloudFormation,
[08:46.360 --> 08:48.880] and Amazon CloudFront,
[08:48.880 --> 08:53.040] and also Amazon Kinesis Data Firehose.
[08:53.040 --> 08:56.760] For asynchronous invocation, AWS Lambda queues
[08:56.760 --> 09:00.320] the event before passing it to your function.
[09:00.320 --> 09:02.760] The other service gets a success response
[09:02.760 --> 09:04.920] as soon as the event is queued,
[09:04.920 --> 09:06.560] and if an error occurs,
[09:06.560 --> 09:09.760] Lambda will automatically retry the invocation twice.
[09:10.760 --> 09:14.520] A good example of this would be S3, SNS,
[09:14.520 --> 09:17.720] SES, the Simple Email Service,
[09:17.720 --> 09:21.120] AWS CloudFormation, Amazon CloudWatch Logs,
[09:21.120 --> 09:25.400] CloudWatch Events, AWS CodeCommit, and AWS Config.
[09:25.400 --> 09:28.280] But in both cases, you can invoke a Lambda function
[09:28.280 --> 09:30.000] using the invoke operation,
[09:30.000 --> 09:32.720] and you can specify the invocation type
[09:32.720 --> 09:35.440] as either synchronous or asynchronous.
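As a sketch of that invoke operation, here's roughly what the parameters look like for the two invocation types; the function name and payload below are placeholders, not values from the talk:

```python
import json

def build_invoke_params(function_name, payload, synchronous):
    """Parameters for the Lambda Invoke operation: 'RequestResponse' waits
    for the function's result, 'Event' queues the event and returns."""
    return {
        "FunctionName": function_name,  # hypothetical function name
        "InvocationType": "RequestResponse" if synchronous else "Event",
        "Payload": json.dumps(payload),
    }

# With boto3 you would pass these to boto3.client("lambda").invoke(**params).
params = build_invoke_params("my-function", {"id": 42}, synchronous=False)
```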
[09:35.440 --> 09:38.760] And when you use the AWS service as a trigger,
[09:38.760 --> 09:42.280] the invocation type is predetermined for each service,
[09:42.280 --> 09:44.920] and so you have no control over the invocation type
[09:44.920 --> 09:48.920] that these event sources use when they invoke your Lambda.
[09:50.800 --> 09:52.120] In the polling model,
[09:52.120 --> 09:55.720] the event sources will put information into a stream or a queue,
[09:55.720 --> 09:59.360] and AWS Lambda will poll the stream or the queue.
[09:59.360 --> 10:01.000] When it finds a record,
[10:01.000 --> 10:03.280] it will deliver the payload and invoke the function.
[10:03.280 --> 10:04.920] In this model, the Lambda service itself
[10:04.920 --> 10:07.920] is basically polling data from a stream or a queue
[10:07.920 --> 10:10.280] for processing by the Lambda function.
[10:10.280 --> 10:12.640] Some examples of stream-based event sources
[10:12.640 --> 10:17.640] would be Amazon DynamoDB Streams or Amazon Kinesis Data Streams,
[10:17.800 --> 10:20.920] and these stream records are organized into shards.
[10:20.920 --> 10:24.640] So Lambda would actually poll the stream for records
[10:24.640 --> 10:27.120] and then attempt to invoke the function.
[10:27.120 --> 10:28.800] If there's a failure,
[10:28.800 --> 10:31.480] AWS Lambda won't read any new records from the shard
[10:31.480 --> 10:34.840] until the failed batch of records expires or is processed
[10:34.840 --> 10:36.160] successfully.
[10:36.160 --> 10:39.840] For a non-streaming event source, which would be SQS,
[10:39.840 --> 10:42.400] Lambda would poll the queue for records.
[10:42.400 --> 10:44.600] If it fails or times out,
[10:44.600 --> 10:46.640] then the message would be returned to the queue,
[10:46.640 --> 10:49.320] and then Lambda will keep retrying the failed message
[10:49.320 --> 10:51.800] until it's processed successfully.
[10:51.800 --> 10:53.600] If the message expires,
[10:53.600 --> 10:56.440] which is something you can do with SQS,
[10:56.440 --> 10:58.240] then it'll just be discarded.
[10:58.240 --> 11:00.400] And you can create a mapping between an event source
[11:00.400 --> 11:02.960] and a Lambda function right inside of the console.
[11:02.960 --> 11:05.520] And this is how typically you would set that up manually
[11:05.520 --> 11:07.600] without using infrastructure as code.
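If you do want to script that event source mapping rather than click through the console, the CreateEventSourceMapping parameters look roughly like this; the queue ARN and function name are hypothetical:

```python
def build_event_source_mapping(queue_arn, function_name):
    """Parameters for Lambda's CreateEventSourceMapping API, which tells
    Lambda to poll a queue or stream and invoke the function with batches."""
    return {
        "EventSourceArn": queue_arn,  # e.g. an SQS queue ARN
        "FunctionName": function_name,
        "BatchSize": 10,              # records handed to each invocation
        "Enabled": True,
    }

# You would pass these to boto3.client("lambda").create_event_source_mapping.
mapping = build_event_source_mapping(
    "arn:aws:sqs:us-east-1:123456789012:my-queue", "my-function")
```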
[11:08.560 --> 11:10.200] All right, let's talk about permissions.
[11:10.200 --> 11:13.080] This is definitely an easy place to get tripped up
[11:13.080 --> 11:15.760] when you're first using AWS Lambda.
[11:15.760 --> 11:17.840] There's two types of permissions.
[11:17.840 --> 11:20.120] The first is that the event source needs permission
[11:20.120 --> 11:22.320] to trigger the Lambda function.
[11:22.320 --> 11:24.480] This would be the invocation permission.
[11:24.480 --> 11:26.440] And the next one would be that the Lambda function
[11:26.440 --> 11:29.600] needs permissions to interact with other services,
[11:29.600 --> 11:31.280] and this would be the execution permissions.
[11:31.280 --> 11:34.520] And these are both handled via the IAM service
[11:34.520 --> 11:38.120] or the AWS identity and access management service.
[11:38.120 --> 11:43.120] So the IAM resource policy would tell the Lambda service
[11:43.600 --> 11:46.640] which push event sources have permission
[11:46.640 --> 11:48.560] to invoke the Lambda function.
[11:48.560 --> 11:51.120] And these resource policies would make it easy
[11:51.120 --> 11:55.280] to grant access to a Lambda function across AWS accounts.
[11:55.280 --> 11:58.400] So a good example would be if you have an S3 bucket
[11:58.400 --> 12:01.400] in your account and you need to invoke a function
[12:01.400 --> 12:03.880] in another account, you could create a resource policy
[12:03.880 --> 12:07.120] that allows those to interact with each other.
[12:07.120 --> 12:09.200] And the resource policy for a Lambda function
[12:09.200 --> 12:11.200] is called a function policy.
[12:11.200 --> 12:14.160] And when you add a trigger to your Lambda function
[12:14.160 --> 12:16.760] from the console, the function policy
[12:16.760 --> 12:18.680] will be generated automatically
[12:18.680 --> 12:20.040] and it allows the event source
[12:20.040 --> 12:22.820] to take the Lambda invoke function action.
[12:24.400 --> 12:27.320] So a good example would be granting Amazon S3 permission
[12:27.320 --> 12:32.120] to invoke a Lambda function called MyFirstFunction.
[12:32.120 --> 12:34.720] Basically, the Effect would be Allow,
[12:34.720 --> 12:36.880] then under Principal you would have the service
[12:36.880 --> 12:41.880] s3.amazonaws.com, the Action would be
[12:41.880 --> 12:45.400] lambda:InvokeFunction, the Resource would be the name,
[12:45.400 --> 12:49.120] or the ARN, of the Lambda function,
[12:49.120 --> 12:53.080] and then the Condition would be the ARN of the bucket.
[12:54.400 --> 12:56.720] And really that's it in a nutshell.
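Written out, that function policy statement looks roughly like this; the account id, function name, and bucket name are hypothetical placeholders:

```python
# Sketch of the S3 -> Lambda function policy statement described above.
function_policy_statement = {
    "Effect": "Allow",
    "Principal": {"Service": "s3.amazonaws.com"},
    "Action": "lambda:InvokeFunction",
    # Resource: the ARN of the Lambda function being invoked (hypothetical).
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:MyFirstFunction",
    # Condition: restrict the caller to a specific bucket ARN (hypothetical).
    "Condition": {"ArnLike": {"AWS:SourceArn": "arn:aws:s3:::my-bucket"}},
}
```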
[12:57.560 --> 13:01.480] The Lambda execution role grants your Lambda function
[13:01.480 --> 13:05.040] permission to access AWS services and resources.
[13:05.040 --> 13:08.000] And you select or create the execution role
[13:08.000 --> 13:10.000] when you create a Lambda function.
[13:10.000 --> 13:12.320] The IAM policy would define the actions
[13:12.320 --> 13:14.440] the Lambda function is allowed to take,
[13:14.440 --> 13:16.720] and the trust policy allows the Lambda service
[13:16.720 --> 13:20.040] to assume an execution role.
[13:20.040 --> 13:23.800] To grant permissions to AWS Lambda to assume a role,
[13:23.800 --> 13:27.460] you have to have permission for the iam:PassRole action.
[13:28.320 --> 13:31.000] A couple of different examples of a relevant policy
[13:31.000 --> 13:34.560] for an execution role: in one example,
[13:34.560 --> 13:37.760] the IAM policy,
[13:37.760 --> 13:39.840] basically like the one we talked about earlier,
[13:39.840 --> 13:43.000] would allow the function to interact with S3.
[13:43.000 --> 13:45.360] Another example would be to make it interact
[13:45.360 --> 13:49.240] with CloudWatch logs and to create a log group
[13:49.240 --> 13:51.640] and stream those logs.
[13:51.640 --> 13:54.800] The trust policy would give the Lambda service permission
[13:54.800 --> 13:57.600] to assume a role and invoke a Lambda function
[13:57.600 --> 13:58.520] on your behalf.
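That trust policy, written out, is short. This is the standard shape for letting the Lambda service assume an execution role:

```python
# Trust (assume-role) policy that lets the AWS Lambda service assume an
# execution role on your behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
```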
[13:59.560 --> 14:02.600] Now let's talk about the overview of authoring
[14:02.600 --> 14:06.120] and configuring Lambda functions.
[14:06.120 --> 14:10.440] So really to start with, to create a Lambda function,
[14:10.440 --> 14:14.840] you first need to create a Lambda function deployment package,
[14:14.840 --> 14:19.800] which is a zip or jar file that consists of your code
[14:19.800 --> 14:23.160] and any dependencies with Lambda,
[14:23.160 --> 14:25.400] you can use the programming language
[14:25.400 --> 14:27.280] and integrated development environment
[14:27.280 --> 14:29.800] that you're most familiar with.
[14:29.800 --> 14:33.360] And you can actually bring the code you've already written.
[14:33.360 --> 14:35.960] And Lambda does support lots of different languages
[14:35.960 --> 14:39.520] like Node.js, Python, Ruby, Java, Go,
[14:39.520 --> 14:41.160] and .NET runtimes.
[14:41.160 --> 14:44.120] And you can also implement a custom runtime
[14:44.120 --> 14:45.960] if you wanna use a different language as well,
[14:45.960 --> 14:48.480] which is actually pretty cool.
[14:48.480 --> 14:50.960] And if you wanna create a Lambda function,
[14:50.960 --> 14:52.800] you would specify the handler,
[14:52.800 --> 14:55.760] the Lambda function handler is the entry point.
[14:55.760 --> 14:57.600] And a few different aspects of it
[14:57.600 --> 14:59.400] that are important to pay attention to,
[14:59.400 --> 15:00.720] the event object,
[15:00.720 --> 15:03.480] this would provide information about the event
[15:03.480 --> 15:05.520] that triggered the Lambda function.
[15:05.520 --> 15:08.280] And this could be a predefined object
[15:08.280 --> 15:09.760] that an AWS service generates.
[15:09.760 --> 15:11.520] So you'll see this, like for example,
[15:11.520 --> 15:13.440] in the console of AWS,
[15:13.440 --> 15:16.360] you can actually ask for these objects
[15:16.360 --> 15:19.200] and it'll give you really the JSON structure
[15:19.200 --> 15:20.680] so you can test things out.
[15:21.880 --> 15:23.900] The contents of an event object
[15:23.900 --> 15:26.800] include everything you would need to actually invoke it.
[15:26.800 --> 15:29.640] The context object is generated by AWS
[15:29.640 --> 15:32.360] and this is really runtime information.
[15:32.360 --> 15:35.320] And so if you needed to get some kind of runtime information
[15:35.320 --> 15:36.160] about your code,
[15:36.160 --> 15:40.400] let's say environmental variables or AWS request ID
[15:40.400 --> 15:44.280] or a log stream, or remaining time in millis,
[15:45.320 --> 15:47.200] like for example, that one would return
[15:47.200 --> 15:48.840] the number of milliseconds that remain
[15:48.840 --> 15:50.600] before your function times out,
[15:50.600 --> 15:53.300] you can get all that inside the context object.
[15:54.520 --> 15:57.560] So what about an example in Python?
[15:57.560 --> 15:59.280] Pretty straightforward, actually.
[15:59.280 --> 16:01.400] All you need is to write a handler:
[16:01.400 --> 16:03.280] the handler would be
[16:03.280 --> 16:05.000] a Python function,
[16:05.000 --> 16:07.080] it would take an event and a context
[16:07.080 --> 16:10.960] passed in, and then you return some kind of message.
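A minimal sketch of that handler; the event field and return shape here are made up for illustration:

```python
def lambda_handler(event, context):
    """Entry point: Lambda passes the triggering event and a context object."""
    name = event.get("name", "world")  # hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local smoke test with a stand-in event (context is unused here).
result = lambda_handler({"name": "Lambda"}, None)
```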
[16:10.960 --> 16:13.960] A few different best practices to remember
[16:13.960 --> 16:17.240] about AWS Lambda would be to separate
[16:17.240 --> 16:20.320] the core business logic from the handler method
[16:20.320 --> 16:22.320] and this would make your code more portable,
[16:22.320 --> 16:24.280] and enable you to write targeted unit tests
[16:25.240 --> 16:27.120] without having to worry about the configuration.
[16:27.120 --> 16:30.400] So this is always a really good idea just in general.
[16:30.400 --> 16:32.680] Make sure you have modular functions.
[16:32.680 --> 16:34.320] So you have a single purpose function,
[16:34.320 --> 16:37.160] you don't have like a kitchen sink function,
[16:37.160 --> 16:40.000] you treat functions as stateless as well.
[16:40.000 --> 16:42.800] So you would have a function that basically
[16:42.800 --> 16:46.040] just does one thing, and then when it's done,
[16:46.040 --> 16:48.320] there is no state that's actually kept anywhere,
[16:49.320 --> 16:51.120] and also only include what you need.
[16:51.120 --> 16:55.840] So you don't want to have huge Lambda functions,
[16:55.840 --> 16:58.560] and keeping the deployment package small
[16:58.560 --> 17:02.360] reduces the time it takes Lambda to unpack
[17:02.360 --> 17:04.000] the deployment package,
[17:04.000 --> 17:06.600] and you can also minimize the complexity
[17:06.600 --> 17:08.640] of your dependencies as well.
[17:08.640 --> 17:13.600] And you can also reuse the temporary runtime environment
[17:13.600 --> 17:16.080] to improve the performance of a function as well.
[17:16.080 --> 17:17.680] And so the temporary runtime environment
[17:17.680 --> 17:22.280] initializes any external dependencies of the Lambda code
[17:22.280 --> 17:25.760] and you can make sure that any externalized configuration
[17:25.760 --> 17:27.920] or dependencies that your code retrieves are stored
[17:27.920 --> 17:30.640] and referenced locally after the initial run.
[17:30.640 --> 17:33.800] So this would mean limiting re-initialization of variables
[17:33.800 --> 17:35.960] and objects on every invocation,
[17:35.960 --> 17:38.200] and keeping alive and reusing connections,
[17:38.200 --> 17:40.680] like HTTP or database connections,
[17:40.680 --> 17:43.160] that were established during a previous invocation.
[17:43.160 --> 17:45.880] So a really good example of this would be a socket connection.
[17:45.880 --> 17:48.040] If you make a socket connection
[17:48.040 --> 17:51.640] and this socket connection took two seconds to spawn,
[17:51.640 --> 17:54.000] you don't want every time you call Lambda
[17:54.000 --> 17:55.480] for it to wait two seconds,
[17:55.480 --> 17:58.160] you want to reuse that socket connection.
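A sketch of that reuse pattern, with a stand-in object for the expensive connection: anything created at module level, outside the handler, survives warm invocations of the same execution environment.

```python
_connection = None  # lives for the life of the execution environment

def get_connection():
    """Create the expensive connection once, reuse it on warm starts."""
    global _connection
    if _connection is None:
        _connection = object()  # stand-in for a slow socket/DB connection
    return _connection

def lambda_handler(event, context):
    conn = get_connection()  # cold start pays the setup cost; warm starts don't
    return {"reused": conn is _connection}
```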
[17:58.160 --> 18:00.600] A few good examples of best practices
[18:00.600 --> 18:02.840] would be including logging statements.
[18:02.840 --> 18:05.480] This is a kind of a big one
[18:05.480 --> 18:08.120] in the case of any cloud computing operation,
[18:08.120 --> 18:10.960] especially when it's distributed, if you don't log it,
[18:10.960 --> 18:13.280] there's no way you can figure out what's going on.
[18:13.280 --> 18:16.560] So you must add logging statements that have context
[18:16.560 --> 18:19.720] so you know which particular Lambda instance
[18:19.720 --> 18:21.600] the code is actually running in.
[18:21.600 --> 18:23.440] Also include results.
[18:23.440 --> 18:25.560] So make sure that you know what happened
[18:25.560 --> 18:29.000] when the Lambda ran, and use environment variables as well.
[18:29.000 --> 18:31.320] So you can figure out things like what the bucket was
[18:31.320 --> 18:32.880] that it was writing to.
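A sketch of contextual logging along those lines; `TARGET_BUCKET` is a hypothetical environment variable, and the request id falls back to a placeholder when the handler runs outside Lambda:

```python
import logging
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Log the request id and target bucket so each invocation is traceable.
    request_id = getattr(context, "aws_request_id", "local-test")
    bucket = os.environ.get("TARGET_BUCKET", "unset")  # hypothetical variable
    logger.info("request_id=%s bucket=%s event=%s", request_id, bucket, event)
    return {"request_id": request_id, "bucket": bucket}
```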
[18:32.880 --> 18:35.520] And then also don't do recursive code.
[18:35.520 --> 18:37.360] That's really a no-no.
[18:37.360 --> 18:40.200] You want to write very simple functions with Lambda.
[18:41.320 --> 18:44.440] Few different ways to write Lambda actually would be
[18:44.440 --> 18:46.280] that you can do the console editor,
[18:46.280 --> 18:47.440] which I use all the time.
[18:47.440 --> 18:49.320] I like to actually just play around with it.
[18:49.320 --> 18:51.640] Now the downside is that
[18:51.640 --> 18:53.800] if you do need to use custom libraries,
[18:53.800 --> 18:56.600] you're not gonna be able to do it, other than using,
[18:56.600 --> 18:58.440] let's say, the AWS SDK.
[18:58.440 --> 19:01.600] But for just simple things, it's a great use case.
[19:01.600 --> 19:06.080] Another one is you can just upload it to the AWS console.
[19:06.080 --> 19:09.040] And so you can create a deployment package in an IDE.
[19:09.040 --> 19:12.120] Like, for example, Visual Studio for .NET,
[19:12.120 --> 19:13.280] you can actually just right click
[19:13.280 --> 19:16.320] and deploy it directly into Lambda.
[19:16.320 --> 19:20.920] Another one is you can upload the entire package to S3
[19:20.920 --> 19:22.200] and put it into a bucket.
[19:22.200 --> 19:26.280] And then Lambda will just grab it out of that S3 bucket.
[19:26.280 --> 19:29.760] A few different things to remember about Lambda.
[19:29.760 --> 19:32.520] The memory and the timeout are configurations
[19:32.520 --> 19:35.840] that determine how the Lambda function performs.
[19:35.840 --> 19:38.440] And these will affect the billing.
[19:38.440 --> 19:40.200] Now, one of the great things about Lambda
[19:40.200 --> 19:43.640] is just amazingly inexpensive to run.
[19:43.640 --> 19:45.560] And the reason is that you're charged
[19:45.560 --> 19:48.200] based on the number of requests for a function.
[19:48.200 --> 19:50.560] A few different things to remember: first, the memory.
[19:50.560 --> 19:53.560] So if you specify more memory,
[19:53.560 --> 19:57.120] it's going to increase the cost. Next, the timeout.
[19:57.120 --> 19:59.960] You can control the duration of the function
[19:59.960 --> 20:01.720] by setting the right kind of timeout.
[20:01.720 --> 20:03.960] But if you make the timeout too long,
[20:03.960 --> 20:05.880] it could cost you more money.
[20:05.880 --> 20:08.520] So really the best practices would be test the performance
[20:08.520 --> 20:12.880] of Lambda and make sure you have the optimum memory size.
[20:12.880 --> 20:15.160] Also load test it to make sure
[20:15.160 --> 20:17.440] that you understand how the timeouts work.
[20:17.440 --> 20:18.280] Just in general,
[20:18.280 --> 20:21.640] anything with cloud computing, you should load test it.
[20:21.640 --> 20:24.200] Now let's talk about an important topic
[20:24.200 --> 20:25.280] that's a final topic here,
[20:25.280 --> 20:29.080] which is how to deploy Lambda functions.
[20:29.080 --> 20:32.200] So versions are immutable copies of the code
[20:32.200 --> 20:34.200] and the configuration of your Lambda function.
[20:34.200 --> 20:35.880] And the versioning will allow you to publish
[20:35.880 --> 20:39.360] one or more versions of your Lambda function.
[20:39.360 --> 20:40.400] And as a result,
[20:40.400 --> 20:43.360] you can work with different variations of your Lambda function
[20:44.560 --> 20:45.840] in your development workflow,
[20:45.840 --> 20:48.680] like development, beta, production, et cetera.
[20:48.680 --> 20:50.320] And when you create a Lambda function,
[20:50.320 --> 20:52.960] there's only one version, the latest version,
[20:52.960 --> 20:54.080] $LATEST.
[20:54.080 --> 20:57.240] And you can refer to this function using the ARN,
[20:57.240 --> 20:59.240] or Amazon Resource Name.
[20:59.240 --> 21:00.640] And when you publish a new version,
[21:00.640 --> 21:02.920] AWS Lambda will make a snapshot
[21:02.920 --> 21:05.320] of the latest version to create a new version.
[21:06.800 --> 21:09.600] You can also create an alias for Lambda function.
[21:09.600 --> 21:12.280] And conceptually, an alias is just like a pointer
[21:12.280 --> 21:13.800] to a specific function version.
[21:13.800 --> 21:17.040] And you can use that alias in the ARN
[21:17.040 --> 21:18.680] to reference the Lambda function version
[21:18.680 --> 21:21.280] that's currently associated with the alias.
[21:21.280 --> 21:23.400] What's nice about the alias is you can switch back
[21:23.400 --> 21:25.840] and forth between different versions,
[21:25.840 --> 21:29.760] which is pretty nice because in the case of deploying
[21:29.760 --> 21:32.920] a new version, if there's a huge problem with it,
[21:32.920 --> 21:34.080] you just toggle it right back.
[21:34.080 --> 21:36.400] And there's really not a big issue
[21:36.400 --> 21:39.400] in terms of rolling back your code.
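As a sketch, repointing a prod alias (the alias and function names below are hypothetical) with the UpdateAlias operation is all a rollback takes:

```python
def build_alias_update(function_name, version):
    """Parameters for Lambda's UpdateAlias API: repointing the alias moves
    traffic to a different published version without touching any callers."""
    return {
        "FunctionName": function_name,  # hypothetical function name
        "Name": "prod",                 # hypothetical alias name
        "FunctionVersion": version,     # e.g. roll back from "5" to "4"
    }

# You would pass these to boto3.client("lambda").update_alias(**rollback).
rollback = build_alias_update("my-function", "4")
```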
[21:39.400 --> 21:44.400] Now, let's take a look at an example where Amazon S3
[21:45.160 --> 21:46.720] is the event source
[21:46.720 --> 21:48.560] that invokes your Lambda function
[21:48.560 --> 21:50.720] every time a new object is created.
[21:50.720 --> 21:52.880] When Amazon S3 is the event source,
[21:52.880 --> 21:55.800] you can store the information for the event source mapping
[21:55.800 --> 21:59.040] in the configuration for the bucket notifications.
[21:59.040 --> 22:01.000] And then in that configuration,
[22:01.000 --> 22:04.800] you could identify the Lambda function ARN
[22:04.800 --> 22:07.160] that Amazon S3 can invoke.
[22:07.160 --> 22:08.520] But in some cases,
[22:08.520 --> 22:11.680] you're gonna have to update the notification configuration.
[22:11.680 --> 22:14.720] So Amazon S3 will invoke the correct version each time
[22:14.720 --> 22:17.840] you publish a new version of your Lambda function.
[22:17.840 --> 22:21.800] So basically, instead of specifying the function ARN,
[22:21.800 --> 22:23.880] you can specify an alias ARN
[22:23.880 --> 22:26.320] in the notification configuration.
[22:26.320 --> 22:29.160] And as you promote a new version of the Lambda function
[22:29.160 --> 22:32.200] into production, you only need to update the prod alias
[22:32.200 --> 22:34.520] to point to the latest stable version.
[22:34.520 --> 22:36.320] And you also don't need to update
[22:36.320 --> 22:39.120] the notification configuration in Amazon S3.
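Here's a minimal sketch of that notification configuration; the bucket name, function name, and account ID are hypothetical:

```shell
# S3 "ObjectCreated" events target the "prod" alias ARN, so promoting
# a new function version never requires touching this configuration:
cat > notification.json <<'EOF'
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function:prod",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF
# Apply it with:
#   aws s3api put-bucket-notification-configuration \
#     --bucket my-bucket \
#     --notification-configuration file://notification.json
```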
[22:40.480 --> 22:43.080] And when you build serverless applications
[22:43.080 --> 22:46.600] it's common to have code that's shared across Lambda functions,
[22:46.600 --> 22:49.400] it could be custom code, it could be a standard library,
[22:49.400 --> 22:50.560] et cetera.
[22:50.560 --> 22:53.320] And before, and this was really a big limitation,
[22:53.320 --> 22:55.920] was you had to have all the code deployed together.
[22:55.920 --> 22:58.960] But now, one of the really cool things you can do
[22:58.960 --> 23:00.880] is you can configure a Lambda function
[23:00.880 --> 23:03.600] to include additional code as a layer.
[23:03.600 --> 23:05.520] So a layer is basically a ZIP archive
[23:05.520 --> 23:08.640] that contains a library, maybe a custom runtime,
[23:08.640 --> 23:11.720] maybe even some kind of really cool
[23:11.720 --> 23:13.040] pre-trained model.
[23:13.040 --> 23:14.680] And with layers, you can use
[23:14.680 --> 23:15.800] the libraries in your function
[23:15.800 --> 23:18.960] without needing to include them in your deployment package.
[23:18.960 --> 23:22.400] And it's a best practice to have smaller deployment packages
[23:22.400 --> 23:25.240] and share common dependencies with the layers.
[23:26.120 --> 23:28.520] Also layers will help you keep your deployment package
[23:28.520 --> 23:29.360] really small.
[23:29.360 --> 23:32.680] So for Node.js, Python, and Ruby functions,
[23:32.680 --> 23:36.000] you can develop your function code in the console
[23:36.000 --> 23:39.000] as long as you keep the package under three megabytes.
[23:39.000 --> 23:42.320] And then a function can use up to five layers at a time,
[23:42.320 --> 23:44.160] which is pretty incredible actually,
[23:44.160 --> 23:46.040] which means that you could have, you know,
[23:46.040 --> 23:49.240] basically up to 250 megabytes total.
[23:49.240 --> 23:53.920] So for many languages, this is plenty of space.
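A rough sketch of building and attaching a layer with the AWS CLI; the layer name, function name, and account ID are hypothetical:

```shell
# A Python layer is a ZIP whose top-level "python/" directory
# holds the shared libraries:
mkdir -p layer/python
#   pip install --target layer/python numpy    # hypothetical dependency
#   (cd layer && zip -r ../deps.zip python)

# Publish the archive and attach it (up to five layers per function):
#   aws lambda publish-layer-version --layer-name my-deps \
#     --zip-file fileb://deps.zip --compatible-runtimes python3.9
#   aws lambda update-function-configuration --function-name my-function \
#     --layers arn:aws:lambda:us-east-1:123456789012:layer:my-deps:1
test -d layer/python && echo "layer skeleton ready"
```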
[23:53.920 --> 23:56.620] Also Amazon has published a public layer
[23:56.620 --> 23:58.800] that includes really popular libraries
[23:58.800 --> 24:00.800] like NumPy and SciPy,
[24:00.800 --> 24:04.840] which dramatically helps with data processing
[24:04.840 --> 24:05.680] and machine learning.
[24:05.680 --> 24:07.680] Now, if I had to predict the future
[24:07.680 --> 24:11.840] and I wanted to predict a massive announcement,
[24:11.840 --> 24:14.840] I would say that what AWS could do
[24:14.840 --> 24:18.600] is they could have a GPU enabled layer at some point
[24:18.600 --> 24:20.160] that would include pre-trained models.
[24:20.160 --> 24:22.120] And if they did something like that,
[24:22.120 --> 24:24.320] that could really open up the doors
[24:24.320 --> 24:27.000] for the pre-trained model revolution.
[24:27.000 --> 24:30.160] And I would bet that that's possible.
[24:30.160 --> 24:32.200] All right, well, in a nutshell,
[24:32.200 --> 24:34.680] AWS Lambda is one of my favorite services.
[24:34.680 --> 24:38.440] And I think it's worth everybody's time
[24:38.440 --> 24:42.360] that's interested in AWS to play around with AWS Lambda.
[24:42.360 --> 24:47.200] All right, next week, I'm going to cover API Gateway.
[24:47.200 --> 25:13.840] All right, see you next week.

If you enjoyed this video, here are additional resources to look at:

Coursera + Duke Specialization: Building Cloud Computing Solutions at Scale Specialization: https://www.coursera.org/specializations/building-cloud-computing-solutions-at-scale

Python, Bash, and SQL Essentials for Data Engineering Specialization: https://www.coursera.org/specializations/python-bash-sql-data-engineering-duke

AWS Certified Solutions Architect - Professional (SAP-C01) Cert Prep: 1 Design for Organizational Complexity:
https://www.linkedin.com/learning/aws-certified-solutions-architect-professional-sap-c01-cert-prep-1-design-for-organizational-complexity/design-for-organizational-complexity?autoplay=true

Essentials of MLOps with Azure and Databricks: https://www.linkedin.com/learning/essentials-of-mlops-with-azure-1-introduction/essentials-of-mlops-with-azure

O'Reilly Book: Implementing MLOps in the Enterprise

O'Reilly Book: Practical MLOps: https://www.amazon.com/Practical-MLOps-Operationalizing-Machine-Learning/dp/1098103017

O'Reilly Book: Python for DevOps: https://www.amazon.com/gp/product/B082P97LDW/

O'Reilly Book: Developing on AWS with C#: A Comprehensive Guide on Using C# to Build Solutions on the AWS Platform
https://www.amazon.com/Developing-AWS-Comprehensive-Solutions-Platform/dp/1492095877

Pragmatic AI: An Introduction to Cloud-based Machine Learning: https://www.amazon.com/gp/product/B07FB8F8QP/

Pragmatic AI Labs Book: Python Command-Line Tools: https://www.amazon.com/gp/product/B0855FSFYZ

Pragmatic AI Labs Book: Cloud Computing for Data Analysis: https://www.amazon.com/gp/product/B0992BN7W8

Pragmatic AI Book: Minimal Python: https://www.amazon.com/gp/product/B0855NSRR7

Pragmatic AI Book: Testing in Python: https://www.amazon.com/gp/product/B0855NSRR7

Subscribe to Pragmatic AI Labs YouTube Channel: https://www.youtube.com/channel/UCNDfiL0D1LUeKWAkRE1xO5Q

Subscribe to 52 Weeks of AWS Podcast: https://52-weeks-of-cloud.simplecast.com

View content on noahgift.com: https://noahgift.com/

View content on Pragmatic AI Labs Website: https://paiml.com/
