Posts

Showing posts with the label aws

AWS KMS - Basic concepts

Firstly, what is it? AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control customer master keys (CMKs), the encryption keys used to encrypt your data. AWS KMS CMKs are protected by hardware security modules (HSMs) that are validated by the FIPS 140-2 Cryptographic Module Validation Program, except in the China (Beijing) and China (Ningxia) Regions. So, with AWS KMS you can store your customer master keys securely. What are customer master keys (CMKs) then? A customer master key is the primary resource in AWS KMS (so, it has its own ARN). It is a logical representation of a master key. You can create symmetric and asymmetric CMKs. CMKs never leave AWS infrastructure unencrypted. No one from AWS has access to these guys, only you. Your master keys are stored in devices like this (hardware security modules, HSMs). You can read more about the cryptographic details here. Okay, so what can you do with AWS KMS? AWS managed CMKs Using AWS managed C
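To give a rough idea of what working with a CMK looks like from code, here is a minimal sketch using boto3 (the region, key description, and plaintext are made up for illustration; they are not from the post):

```python
import boto3

# A minimal sketch, assuming boto3 is installed and AWS credentials are configured.
kms = boto3.client("kms", region_name="us-west-2")

# Create a symmetric CMK; the key material itself never leaves AWS KMS.
key = kms.create_key(Description="example CMK for demo purposes")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt a small piece of data directly with the CMK...
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"my secret")["CiphertextBlob"]

# ...and decrypt it again; KMS works out which CMK was used from the ciphertext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"my secret"
```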

Managing Secrets in GitLab / Git

Let's say that you have to log in via SSH into an instance, and you work with GitLab, so you want to keep the private key somewhere in GitLab. Is it secure? Let's see! Custom environment variables You can use custom environment variables. Here you can read more about them (Developers cannot change them, only Maintainers and Owners can). There are two types of variables: Variable (the runner creates an environment variable that uses the key for the name and the value for the value) and File (the runner creates an environment variable that uses the key for the name; for the value, the runner writes the variable value to a temporary file and uses this path). It seems that we can use the File type for our purpose. We can set it up via the API or the UI. So, let's do that! Go to the project's Settings > CI/CD. There you will find the Variables section (btw, you can also specify variables per group and even for all projects, in the admin panel). Click the Add Variable button and add a variable: Key:
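If you prefer the API route the post mentions, here is a hedged sketch of adding a File-type variable with Python and requests (the GitLab URL, project ID, token, and variable name below are placeholders, not values from the post):

```python
import requests

# A sketch of the API approach; GITLAB_URL, PROJECT_ID and the token are placeholders.
GITLAB_URL = "https://gitlab.example.com"
PROJECT_ID = 42
headers = {"PRIVATE-TOKEN": "<your_access_token>"}

with open("id_rsa") as f:
    private_key = f.read()

# Create a File-type CI/CD variable; the runner will expose it as a path to a temp file.
response = requests.post(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/variables",
    headers=headers,
    data={
        "key": "SSH_PRIVATE_KEY",
        "value": private_key,
        "variable_type": "file",
        "protected": True,
    },
)
response.raise_for_status()
```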

Terraform - import aws_s3_bucket does not store important attributes like acl

Recently, I had to import some AWS resources into Terraform, and most things went smoothly, but some did not. More specifically, I encountered this problem. And here is my reply on how to deal with it for now. In this post, I am going to be more elaborate about this issue. So, what exactly did I run into? Here is the code: Such a bucket existed and I wanted to import it into Terraform (the bucket was public). So, I typed terraform import 'aws_s3_bucket.my-bucket' 'my-bucket' and pressed enter: Wait, what? I understand the force_destroy argument (it is false by default), because I had not specified it, but acl? I have two grant blocks... and according to the documentation, acl conflicts with grant. So, how is it even possible? 🤔 It was tempting to run the terraform apply command... so let's do that! And what happened? Terraform (or should I say the AWS provider?) ignored these grant blocks and removed some ACL (Access control list) records from my bu
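Since the issue is about grants silently disappearing, a quick way to see the bucket's ACL before and after terraform apply is to ask AWS directly. A hedged boto3 sketch (the bucket name is a placeholder, and this is a side check I would run next to Terraform, not code from the post):

```python
import boto3

# Small helper to inspect the ACL grants on a bucket before and after terraform apply.
s3 = boto3.client("s3")

def print_grants(bucket: str) -> None:
    acl = s3.get_bucket_acl(Bucket=bucket)
    for grant in acl["Grants"]:
        grantee = grant["Grantee"].get("URI") or grant["Grantee"].get("ID")
        print(f"{grant['Permission']:<12} {grantee}")

print_grants("my-bucket")  # placeholder bucket name
```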

Terraform - Create two buckets in two different regions using meta-argument

Let's say that you provision your AWS resources with Terraform, and you keep almost everything in the Oregon region, but you have some S3 buckets in another region (California, for example). How can you deal with that? You can specify the provider meta-argument for a specific resource! Firstly, you must define two providers (a default one for Oregon, and another one for California): The alias is very important, we are going to use it in a minute 🏃‍♂️ Now, let's create two buckets, one in Oregon and another one in California: For the bucket in Oregon, we do not have to specify a provider because: By default, Terraform interprets the initial word in the resource type name (separated by underscores) as the local name of a provider, and uses that provider's default configuration. In our example, it is "aws". For the bucket in California we must select the other provider. And for that we use the alias "california" (aws.california). That's it, folks.
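The post itself is about Terraform provider aliases, but the underlying idea (a default region plus an explicit override for one resource) can be sketched with boto3 purely as an analogue, not the HCL from the post; bucket names are made up:

```python
import boto3

# An analogue only: one "default" client for Oregon and a second, explicitly
# configured client for California, mirroring the default vs. aliased provider.
oregon = boto3.client("s3", region_name="us-west-2")       # default (Oregon)
california = boto3.client("s3", region_name="us-west-1")   # "aliased" (California)

oregon.create_bucket(
    Bucket="my-oregon-bucket",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
california.create_bucket(
    Bucket="my-california-bucket",
    CreateBucketConfiguration={"LocationConstraint": "us-west-1"},
)
```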

You have reached your pull rate limit!

Ahh, yes! You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit I bet you have encountered this problem if you are reading this post right now. According to this document: Beginning November 2, 2020, progressive enforcement of rate limits for anonymous and authenticated Docker Hub usage goes into effect. This means that anonymous and free Docker Hub users will have usage restrictions gradually placed on container image pull requests. Sadly, this happened on our clusters (AWS EKS). How to fix it? I wanted to spawn a DaemonSet object in which I could run the docker login command and this way change config.json on every node. But after that you need to restart the docker process on every node, and I still do not know how to do that on AWS EKS. So, a temporary fix was to create a Secret object and then link it to every ServiceAccount object. It is "a hack", but we needed a working solution very fast. We are not g
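For reference, the "link the Secret to every ServiceAccount" workaround can be scripted. A hedged sketch with the official kubernetes Python client; the Secret name "regcred" and the namespace list are assumptions, not values from the post:

```python
from kubernetes import client, config

# Sketch of the workaround: patch ServiceAccounts so their pods pull images with
# Docker Hub credentials. The Secret "regcred" (type kubernetes.io/dockerconfigjson)
# is assumed to exist already; the namespaces below are placeholders.
config.load_kube_config()
v1 = client.CoreV1Api()

for namespace in ["default", "my-app"]:
    for sa in v1.list_namespaced_service_account(namespace).items:
        v1.patch_namespaced_service_account(
            name=sa.metadata.name,
            namespace=namespace,
            body={"imagePullSecrets": [{"name": "regcred"}]},
        )
```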

DynamoDB in pytest-dbfixtures

pytest-dbfixtures If you use pytest, maybe you have heard about pytest-dbfixtures: Pytest dbfixtures is a pytest plugin that makes it a lot easier to set up proper database or storage engine for testing. Simply use one of provided fixtures that start predefined clean database server for your tests or creates server more tailored for your application by using one of provided factories. This plugin is very useful if you have integration tests in your project and you want to run them against a database, for example. You will find information on how to use it in the documentation. Currently, the plugin supports: Postgresql, MySQL, Redis, Mongo, Elasticsearch, RabbitMQ. And recently, we have added support for DynamoDB. Here, you will find out how to run DynamoDB on your computer. And, here we are. If you want to use it in production, you want to test it locally. dynamodb fixture If you still do not use pytest, go to the pytest page and read how to use it, now. If you want to
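A hedged sketch of what a test using the fixture can look like; it assumes the dynamodb fixture yields a boto3 DynamoDB resource pointed at the local DynamoDB instance (check the plugin documentation for the exact interface):

```python
# A sketch only; it assumes the `dynamodb` fixture yields a boto3 DynamoDB resource
# connected to the local DynamoDB started for the test session.
def test_put_and_get_item(dynamodb):
    table = dynamodb.create_table(
        TableName="users",
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
    )
    table.put_item(Item={"id": "123", "name": "Alice"})
    assert table.get_item(Key={"id": "123"})["Item"]["name"] == "Alice"
```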

AWS Lambda and Python

What is AWS Lambda? AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. So, you can build a scalable application without managing servers. Something like a microservice without servers. Serverless! Hence, your function must be written in a stateless style. If you want to store something somewhere, you can connect to S3, Redshift, DynamoDB, etc. AWS Lambda will start to execute your code within milliseconds. I am not going to write a step-by-step tutorial. You will find here only a handful of necessary pieces of information. Limits Let's look at the limits (all of them are defaults; you can ask the folks at AWS to increase them). You have access to an ephemeral disk limited to 512 MB (access to /tmp only). Your function must finish within 300 seconds (that is the default maximum value; you can also set it to 10 seconds if you want); if not, AWS Lambda will terminate it. Zip
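For context, the shape of a Python Lambda handler is as simple as this; a minimal, stateless sketch where the event fields and response are made up:

```python
import json

# A minimal, stateless handler sketch; the event shape and response body are made up.
def lambda_handler(event, context):
    name = event.get("name", "world")
    # context.get_remaining_time_in_millis() tells you how close you are to the timeout.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```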