Posts

AWS KMS - Basic concepts

Firstly, what is it? AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control customer master keys (CMKs), the encryption keys used to encrypt your data. AWS KMS CMKs are protected by hardware security modules (HSMs) that are validated by the FIPS 140-2 Cryptographic Module Validation Program, except in the China (Beijing) and China (Ningxia) Regions. So, with AWS KMS you can store your customer master keys securely. What are customer master keys (CMKs) then? A customer master key is the primary resource in AWS KMS (so, it has its own ARN). It is a logical representation of a master key. You can create symmetric and asymmetric CMKs. CMKs never leave AWS infrastructure unencrypted. No one from AWS has access to these guys, only you. Your master keys are stored in such devices (hardware security modules, HSMs): You can read more about the cryptographic details here. Okay, so what can you do with AWS KMS? AWS managed CMKs U...
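To make this a bit more concrete, here is a minimal sketch (the alias and file names are made up, not from the post) of creating a symmetric CMK and encrypting a small file with the AWS CLI:

```bash
# Create a symmetric CMK; the command returns the key's metadata, including its ARN
aws kms create-key --description "Example CMK for my-app"

# Optionally give the key a friendly alias instead of using the raw key id
aws kms create-alias \
  --alias-name alias/my-app \
  --target-key-id <key-id-from-the-previous-output>

# Encrypt a small file; the CMK itself never leaves AWS, only the request does
aws kms encrypt \
  --key-id alias/my-app \
  --plaintext fileb://secret.txt \
  --query CiphertextBlob \
  --output text > secret.txt.encrypted   # base64-encoded ciphertext
```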

Managing Secrets in GitLab / Git

Let's say that you have to log in to an instance via SSH, and you work with GitLab, so you want to keep the private key in GitLab somewhere. Is it secure? Let's see! Custom environment variables You can use custom environment variables. Here you can read more about them (Developers cannot change them, only Maintainers and Owners can). There are two types of variables: Variable (the runner creates an environment variable that uses the key for the name and the value for the value) and File (the runner creates an environment variable that uses the key for the name; for the value, the runner writes the variable value to a temporary file and uses its path). It seems that we can use the File type for our purpose. We can set it up via the API or the UI. So, let's do that! Go to the project's Settings > CI/CD. There will be a Variables section (btw, you can also specify variables per group and even for all projects (in the admin panel)). Click the Add Variable button and add a variable: Key: ...
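To illustrate how a File-type variable would be used afterwards, here is a minimal .gitlab-ci.yml sketch (the variable name SSH_PRIVATE_KEY, the image and the host are my assumptions, not taken from the post):

```yaml
# Assumes SSH_PRIVATE_KEY was added as a File-type variable, so the runner exposes
# it as the path to a temporary file rather than as the key material itself.
deploy:
  image: alpine:latest
  script:
    - apk add --no-cache openssh-client
    - chmod 600 "$SSH_PRIVATE_KEY"          # the variable holds a file path
    - ssh -i "$SSH_PRIVATE_KEY" -o StrictHostKeyChecking=no user@my-instance hostname
```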

Terraform - import aws_s3_bucket does not store important attributes like acl

Recently, I had to import some AWS resources into Terraform, and most things went smoothly, but some did not. More specifically, I encountered this problem. And here is my reply on how to deal with it for now. In this post, I am going to be more elaborate about this issue. So, what exactly did I run into? Here is the code: Such a bucket existed and I wanted to import this guy into Terraform (the bucket was public). So, I typed terraform import 'aws_s3_bucket.my-bucket' 'my-bucket' and pressed enter: Wait, what? I understand the force_destroy argument (it is false by default), because I had not specified it, but acl? I have two grant blocks... and according to the documentation, acl conflicts with grant. So, how is it even possible? 🤔 It was tempting to run the terraform apply command... so let's do that! And what happened? Terraform (or should I say the aws provider?) ignored these grant blocks and removed some ACL (Access control list) records from my bu...
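For context, a configuration like the one described might look roughly like this (the grantee id is a placeholder, and this is written for the pre-4.0 AWS provider, where acl and grant still lived on aws_s3_bucket):

```hcl
resource "aws_s3_bucket" "my-bucket" {
  bucket = "my-bucket"

  # Public read access for everyone
  grant {
    type        = "Group"
    uri         = "http://acs.amazonaws.com/groups/global/AllUsers"
    permissions = ["READ"]
  }

  # Full control for the bucket owner (placeholder canonical user id)
  grant {
    id          = "CANONICAL_USER_ID"
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}

# Imported with:
#   terraform import 'aws_s3_bucket.my-bucket' 'my-bucket'
```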

GitLab - Spawn a job with any command you want

The problem: you have many scripts (let's say that they are written in Python and you just want to run them by typing python your-script.py) and sometimes you want to run some of them, sometimes only one, etc. There is no pattern. Additionally, you want to trigger these scripts via the GitLab API. How can you do this? The first idea: let's create a job for each of them! But... then what? You want to run only a small subset of them, and each time this subset might be different ☹️. You might add variables, check them and run only those jobs whose IF evaluates to true, like: But I think that you see the problem with this approach. What about creating a common job without any command? It will be your job to provide a command for the script section (for example python run-something-and-upload-to-s3.py). This way we will have only one job in GitLab, and when triggering it you must provide a command. The code: 8 lines. Woah! We used the rules keyword, because we want to spawn only this job w...
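A sketch of that common job could look like this (the variable name CMD is my assumption, not necessarily the one used in the post):

```yaml
# The job is created only when the pipeline is triggered with CMD set,
# and it runs whatever command was passed in.
run:
  image: python:3.9
  script:
    - $CMD
  rules:
    - if: '$CMD'
```

Triggering it via the API would then look something like: curl --request POST --form token=<trigger-token> --form ref=main --form 'variables[CMD]=python run-something-and-upload-to-s3.py' https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline.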

Terraform - Create two buckets in two different regions using meta-argument

Let's say that you provision your AWS resources with Terraform, and mostly you keep everything in the Oregon region, but you have some S3 buckets in another region (California, for example). How can you deal with that? You can specify the provider meta-argument for a specific resource! Firstly, you must define two providers (a default one for Oregon, and another one for California): The alias is very important, we are going to use it in a minute 🏃‍♂️ Now, let's create two buckets, one in Oregon and another one in California: For the bucket in Oregon, we do not have to specify a provider because: By default, Terraform interprets the initial word in the resource type name (separated by underscores) as the local name of a provider, and uses that provider's default configuration. In our example, it is "aws". For the bucket in California we must select another provider. And for that we use the alias "california" (aws.california). That's it, folks.
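Put together, the configuration is roughly this (the bucket names are made up):

```hcl
# Default provider: Oregon
provider "aws" {
  region = "us-west-2"
}

# Aliased provider: Northern California
provider "aws" {
  alias  = "california"
  region = "us-west-1"
}

# Uses the default "aws" provider, so it lands in Oregon
resource "aws_s3_bucket" "oregon" {
  bucket = "my-oregon-bucket"
}

# The provider meta-argument points at the aliased provider, so it lands in California
resource "aws_s3_bucket" "california" {
  bucket   = "my-california-bucket"
  provider = aws.california
}
```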

GitLab - Run the same job on multiple images

Sometimes you want to test your code on different Python versions. With Travis CI it is very easy, but can we do that, or something similar, in GitLab? Let's see! The problem: there are some tests written in Python and we want to run them on different Python versions. We can easily use the extends keyword, define some common logic in a .test job and then inherit it in other jobs: But... for each new version we have to define a new job. Maybe we can do better? GitLab has introduced the parallel keyword. Additionally, there is a matrix keyword; using this guy you can run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job. But you cannot change the image keyword, for example 😞. Maybe there is another way? Have you ever heard of dind (Docker-in-Docker)? If not, here is some info about it: Another way to configure GitLab Runner for docker support is to register a runner with the Docker executor and use the...
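For reference, the extends variant mentioned above could look something like this (the image tags and the test command are assumptions):

```yaml
.test:
  script:
    - pip install -r requirements.txt
    - pytest

test:3.8:
  extends: .test
  image: python:3.8

test:3.9:
  extends: .test
  image: python:3.9
```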

GitLab - terraform plan and apply

How do you apply changes in Terraform? In most cases you run terraform plan and then terraform apply and type yes. This approach works great on your local machine, but how do you apply changes (and only the changes you want!) in a GitLab job where you do not have access to a shell? How do you do that when you cannot approve the output of the apply command? You can use terraform apply -auto-approve, but it might be risky... No one likes to destroy something in production without a priori knowledge. So, can we run terraform plan, check the output and then run terraform apply in another step? We can, but it still might be a risky operation. Why? Because plan and apply are separate operations! They know nothing about each other. So, apply can change something which was not shown in plan. But... according to the Terraform documentation: The optional -out argument can be used to save the generated plan to a file for later execution with terraform apply, which can be useful...
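The two-step flow with a saved plan then looks roughly like this (the file name tfplan is arbitrary):

```shell
terraform plan -out=tfplan   # save exactly what would be applied
# ...review the plan output, e.g. behind a manual GitLab job...
terraform apply tfplan       # apply only what was saved in the plan file
```

In a GitLab pipeline, the plan job can expose tfplan as an artifact and the apply job can be marked as manual, so a human reviews the plan before it is executed.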