Infrastructure as Code Concepts
External
- https://infrastructure-as-code.com
- Infrastructure as Code: Dynamic Systems for the Cloud Age by Kief Morris
Internal
- Infrastructure as Code
- Infrastructure Concepts
- Infrastructure Code Testing
- Designing Modular Systems
Overview
Infrastructure is not something you build and forget; it requires constant change: fixing, updating and improving. Infrastructure as Code is a set of technologies and engineering practices aimed at delivering change more frequently (or, some would say, continuously), quickly and reliably, while improving the overall quality of the system at the same time. Trading speed for quality is a false dichotomy: used correctly, Infrastructure as Code embeds speed, quality, reliability and compliance into the process of making changes, and changing infrastructure becomes safer. Infrastructure as Code is based on practices from software development, especially test driven development, continuous integration and continuous delivery.
The capability to make changes frequently and reliably is correlated with organizational success. Organizations can't choose between being good at change and being good at stability: they tend to be either good at both or bad at both (Accelerate by Dr. Nicole Forsgren, Jez Humble and Gene Kim). Changes include adding new services (such as a new database), upgrades, increasing resources to keep up with load, changing and tuning underlying application runtimes for diagnosis and performance reasons, and applying security patches. Stability comes from the capability of making changes quickly and easily: unpatched systems are not stable, they are vulnerable. If you can't fix issues as soon as you discover them, the system is not stable. If you can't recover from failure quickly, the system is not stable. If the changes you make involve considerable downtime, the system is not stable. If changes frequently fail, the system is not stable. Infrastructure as Code practices help teams perform well against the operational metrics described here.
Automating an existing system is hard. Automation, including automated testing and delivery, should be part of the system's design and implementation, and it should evolve organically with the system. You should build the system incrementally, automating as you go.
A few principles to follow when writing infrastructure code:
- Assume systems are unreliable. Cloud platforms run on cheap commodity hardware. Even when they don't, at the scale they operate, failures happen even with reliable hardware. This is why the infrastructure platform and the applications need to build reliability into software. You must design for uninterrupted service when the underlying resources change.
- Make everything reproducible automatically, without the need for on-the-spot decisions about how to build things. Everything should be defined as code: topology, dependencies, configuration. Rebuilding should be a simple "yes/no" decision and running a pipeline instance. Once the system is reproducible, frequent and consistent automation is easier to run. Ensuring state continuity takes special care.
- Avoid snowflake systems. A snowflake is a system, or a part of a system, that is difficult to rebuild. This happens when people make changes to one instance of a system that they don't make to others, causing configuration drift, and the knowledge of how to get it into its current state is lost. As a consequence, people avoid making changes, leaving the system out of date, unpatched or even partially broken.
- Create disposable instances that are easy to discard and replace.
- Minimize variation. The more similar things are, the easier they are to manage. Aim for your system to have as few types of pieces as possible, then automatically instantiate those pieces in as many instances as you need, as a consequence of the make-everything-reproducible principle.
- Ensure that you can repeat any process. If you can script a task, script it. If it's worth documenting, it's worth automating. Repeatedly applying the same code safely requires the code to be idempotent.
- Keep configuration simple. Even if we strive to minimize variation, there will always be a need to configure the infrastructure types. The configurability should be kept in check: configurable code creates opportunities for inconsistency, and the more configurable a piece of infrastructure code is, the more difficult it is to understand its behavior and to ensure it is tested correctly. As such, minimize the number of configuration parameters. Avoid parameters you "might" need; they can be added later. Prefer simple parameters like numbers, strings, and at most lists and key-value maps. Do not get more complex than that. Avoid boolean parameters that toggle complex behavior on or off.
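As a sketch of the keep-configuration-simple principle, in Terraform terms, a reusable stack's interface might look like this (the variable names are hypothetical, chosen for illustration):

```hcl
# Hypothetical interface for a reusable stack: a handful of simple,
# flat parameters, and no boolean that toggles complex behavior.
variable "environment_name" {
  type = string
}

variable "instance_count" {
  type    = number
  default = 2
}

variable "allowed_cidr_blocks" {
  type    = list(string)
  default = []
}
```

Anything more elaborate than these flat types is a sign the component is trying to do too much.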
Change
High quality systems enable faster change.
Core Practices
Define Everything as Code
Code can be versioned and compared, and it can benefit from lessons learned in software design: patterns, principles and techniques such as test driven development, continuous integration, continuous delivery and refactoring.
Once a piece of infrastructure has been defined as code, many identical or slightly different instances of it can be created by tools, automatically tested and deployed. Instances built by tools are built the same way every time, which makes the behavior of the system predictable. Moreover, everyone can see how the infrastructure has been defined by reviewing the code. Structure and configuration can be automatically audited for compliance.
If more people work on the same piece of infrastructure, the changes can be continuously integrated and then continuously tested and delivered, as described in the next section.
Continuously Test and Deliver
Continuously test and deliver all infrastructure work in progress.
Infrastructure Code Testing
Continuously testing small pieces encourages a modular, loosely coupled design. It also helps you find problems sooner, then quickly iterate, fix and rebuild the problematic code, which yields better infrastructure. The fact that tests remain with the code base and are continuously exercised as part of CD runs is referred to as "building quality in" rather than "testing quality in".
Infrastructure Code Continuous Delivery
Build Small, Simple, Loosely Coupled Pieces that Can Be Changed Independently
Infrastructure Code
Infrastructure code consists of text files that contain the definitions of the infrastructure elements to be created, updated or deleted, and their configuration. As such, infrastructure code is sometimes referred to as "infrastructure definition". Infrastructure code elements map directly to infrastructure resources and options exposed by the platform API. The code is fed to a tool, which either creates new instances or modifies the existing infrastructure to match the code. Different tools have different names for their source code: Terraform code (.tf files), CloudFormation templates, Ansible playbooks, etc. One of the essential characteristics of infrastructure code is that the files it is declared in can be viewed, edited and stored independently of the tool that uses them to create the infrastructure: the tool must not be required to review or modify the files. A tool that uses external code for its specifications does not constrain its users to a specific workflow: they can instead use industry-standard source control systems, editors, CI servers and automated testing frameworks.
Declarative Infrastructure Languages
Declarative code defines the desired state of the infrastructure and relies on the tool to handle the logic required to reach that state, whether the infrastructure element does not exist at all and needs to be created, or its state just needs to be adjusted. In other words, declarative code specifies what you want, without specifying how to make it happen. As a result, the code is cleaner and clearer. Declarative code is closer to configuration than to programming. In addition to being declarative, many infrastructure tools use their own DSL instead of a general purpose declarative language like YAML.
Terraform, CloudFormation, Ansible, Chef and Puppet are based on this concept and use their own declarative DSLs. That makes it possible to write code that refers to infrastructure platform-specific domain elements like virtual machines, subnets or disk volumes.
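For illustration, a minimal Terraform-style declarative definition might look like the following (the resource names and values are hypothetical). The code states the desired result; the tool works out what actions to take:

```hcl
# Desired state: a VPC and a subnet with these properties must exist.
# The tool decides whether to create them, adjust them, or do nothing.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "application_servers" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"

  tags = {
    network_tier = "application_servers"
  }
}
```

Nothing in this code says how to create a subnet, or how to detect that one already exists; that logic lives in the tool.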
Most tools support modularization of declarative code by the use of modules.
Also see:
Imperative Languages for Infrastructure
Declarative code becomes awkward to use in situations where you want different results depending on circumstances. Every time you need a conditional is one of those situations. The need to repeat the same action a number of times is another. Because of this, most declarative infrastructure tools have extended their languages to add imperative programming capabilities. Another category of tools, such as Pulumi or AWS CDK, use general-purpose programming languages to define infrastructure.
One of the biggest advantages of using a general-purpose programming language is its ecosystem of tools, especially support for testing and refactoring.
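As an example of such imperative extensions grafted onto a declarative language, Terraform supports loops (`for_each`, `count`) and conditional expressions (the zone names and CIDR values below are hypothetical):

```hcl
variable "availability_zones" {
  type    = list(string)
  default = ["eu-west-1a", "eu-west-1b"]
}

variable "internet_facing" {
  type    = bool
  default = false
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# A loop: one subnet per availability zone.
resource "aws_subnet" "app" {
  for_each          = toset(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  availability_zone = each.key
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, index(var.availability_zones, each.key))
}

# A conditional: the gateway exists only when requested.
resource "aws_internet_gateway" "main" {
  count  = var.internet_facing ? 1 : 0
  vpc_id = aws_vpc.main.id
}
```

Used sparingly this is convenient; used heavily it produces the spaghetti components discussed later.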
Mixing Declarative and Imperative Code
It's not very clear yet whether mixing declarative and imperative code when building different parts of an infrastructure system is good or bad. Also, it is not clear whether one style or another should be mandated. What is a little bit clearer is that declarative and imperative code must not be mixed when implementing the same component. A better approach is defining concerns clearly, and addressing each concern with the best tool, without mixing them. One sign that concerns are being mixed is extending a declarative syntax like YAML to add conditionals and loops. Another sign is mixing configuration data into procedural code - this is an indication that we're mixing what we want with how to implement it. When this happens, code should be split into separate concerns.
Infrastructure Code Management
Infrastructure code should be treated like any other code. It should be designed and managed so that it is easy to understand and maintain. Code quality practices, such as code reviews, automated testing, improving cohesion and reducing coupling, should be followed. Infrastructure code can double as documentation in some cases, as it is always an accurate and up-to-date record of your system. However, the infrastructure code is rarely the only documentation required. High-level documentation is necessary to understand concepts, context and strategy.
Primitives
Stack
A stack is a collection of infrastructure resources that are defined, changed and managed together as a unit, using a tool like Terraform, Pulumi, CloudFormation or Ansible. All elements declared in the stack are provisioned and destroyed with a single command.
A stack is defined using infrastructure source code. The stack source code, usually living in a stack project, is read by the stack management tool, which uses the cloud platform APIs to instantiate or modify the elements defined by the code. The resources in a stack are provisioned together to create a stack instance. The time to provision, change and test a stack depends on the entire stack, and this should be a consideration when deciding how to combine infrastructure elements into stacks. This subject is addressed in the Stack Patterns section, below. CloudFormation and Pulumi have very similar concepts, also named stacks.
The infrastructure stack is an example of an "architectural quantum", defined by Ford, Parsons and Kua as "an independently deployable component with high functional cohesion, which includes all the structural elements required for the system to function correctly". In other words, a stack is a component that can be pushed to production on its own.
A stack can be composed of components, and it may be itself a component used by other stacks.
Code that builds servers should be decoupled from code that builds stacks. A stack should specify what servers to create and pass the information about the environment to a server configuration tool.
Stack Project
A stack project is the source code that declares the infrastructure of a stack. A stack project should define the shape of the stack that is consistent across instances. The stack may be declared in one or more source files.
Stack Instance
A stack definition can be used to provision more than one stack instance. A stack instance is a particular embodiment of a stack, containing actual live infrastructure resources. Changes should not be applied to the stack instance's components directly, at the risk of creating configuration drift, but to the stack code, and then applied via the infrastructure tool. The stack tool reads the stack definition and, using the platform APIs, ensures the stack elements exist and their state matches the desired state. If the tool is run without any changes to the code, it should leave the stack instance unmodified. This process is referred to as "applying" the code to an instance.
Reusable Stack
It is convenient to create multiple stack instances based on a single stack definition. This pattern, named "reusable stack", encourages code reuse and helps keep the resulting stack instances in sync. A fix or an improvement required by a particular stack instance can be made to the reusable stack definition the instance was created from, and then applied to all stack instances created from the same definition.
The reusable stack pattern is only possible if the infrastructure tool supports modules.
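Sketched in Terraform terms (the module path and variable names are hypothetical), the reusable stack lives in a module, and each instance is created by instantiating it with different parameter values:

```hcl
# modules/application-stack/variables.tf - the reusable stack's interface
variable "environment_name" {
  type = string
}

variable "instance_count" {
  type = number
}

# environments/staging/main.tf - one stack instance, created from the
# reusable stack with its own parameter values
module "application_stack" {
  source           = "../../modules/application-stack"
  environment_name = "staging"
  instance_count   = 2
}
```

A fix made in `modules/application-stack` propagates to every instance the next time each one is applied.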
Stack Dependencies
Stacks consume resources provided by other stacks. If stack A creates resources stack B depends on, then stack A is a provider for stack B and stack B is a consumer of stack A. Stack B has dependencies created by stack A. A given stack may be both a provider and a consumer, consuming resources from another stack and providing resources to other stacks. In testing, dependencies are often simulated with test doubles. Dependencies are discovered via different mechanisms.
Stack Integration Point
Stack Management Tools
- Terraform
- Terragrunt
- CloudFormation
- Pulumi
- Azure Resource Manager
- Google Cloud Deployment Manager
- OpenStack Heat
Stack Patterns
Monolithic Stack
A monolithic stack contains too many infrastructure elements, making it difficult to manage the stack well. It is very rare that all the infrastructure elements of an entire environment or system need to be managed as a unit. As such, a monolithic stack is an antipattern. A sign that a stack has become monolithic is that multiple teams are making changes to it at the same time.
Application Group Stack
An application group stack includes the infrastructure for multiple related applications or services, which is provisioned and managed as a unit. This pattern makes sense when a single team owns the infrastructure and the deployment of all the pieces of the application group. An application group stack can align the boundaries of the stack to the team's responsibilities. The team needs to manage the risk to the entire stack for every change, even if only one element is changing. This pattern is inefficient if some parts of the stack change more frequently than others.
Service Stack
A service stack declares the infrastructure for an individual application component (service). This pattern applies to microservices. It aligns the infrastructure boundaries to the software that runs on it. This alignment limits the blast radius for a change to one service, which simplifies the process of scheduling changes. The service team can own the infrastructure their service requires, and that encourages team autonomy. Each service has a separate infrastructure code project.
In case more than one service uses similar infrastructure, packaging the common infrastructure elements into shared modules is a better option than duplicating infrastructure code in each service stack.
Micro Stack
The micro stack pattern divides the infrastructure for a single service across multiple stacks (e.g. separate stacks for networking, servers and database). The separation lines are related to the lifecycle of the underlying managed state for each infrastructure element. Having multiple small stacks requires an increased focus on integration points.
Stack Configuration
The idea behind automated provisioning and management of multiple similar stack instances is that the infrastructure resources are specified in a reusable stack, which is then applied to create similar, yet slightly different, stack instances, depending on their intended use. The difference comes from the need for customization: instances might be configured with different names, different resource settings, or different security constraints.
The typical way of dealing with this situation is to 1) use parameters in the stack code and 2) pass values for those parameters as configuration to the tool that applies the stack. One important principle to follow when exposing stack configuration is to keep configuration simple.
Configuration can be passed as command line arguments, environment variables, the content of a configuration file, a special wrapper stack, pipeline parameters, or have the infrastructure tool to read them from a key/value store or other type of central registry.
Command Line Arguments
The configuration parameters can be passed to the infrastructure tool as command line arguments when the tool is executed. Unless used for experimentation, this is an antipattern, as it requires human intervention on each run, which is what we are trying to move away from. Besides, manually entered parameters are not suitable for automated CD pipelines.
Scripted Command Line Arguments
A variation on this theme is to write a wrapper script around the tool that contains and groups the command line parameters, and to only run the wrapper script, without any manually entered parameters. This pattern is not suitable for security-sensitive configuration, as those values must not be hardcoded in scripts. Also, such scripts tend to become messy over time.
More ideas on how to use scripts in automation are available in Infrastructure as Code: Dynamic Systems for the Cloud Age by Kief Morris, Chapter 7. Configuring Stack Instances → Patterns for Configuring Stacks → Pattern: Scripted Parameters.
Environment Variables
If the infrastructure tool allows it, the parameter values can be set as environment variables. This implies that the environment variables used in the infrastructure code are automatically translated to their value by the tool. If that is not the case, a wrapper script can read the values from environment and convert them to command line arguments for the infrastructure tool.
The values of all environment variables can be collected into a file that can be applied to the environment as follows:
source ./values.env
This is a variation of the Stack Configuration File pattern, introduced below. However, using environment variables directly in the stack code arguably couples the code too tightly to the runtime environment. This approach is also not appropriate when the configuration is security sensitive, as setting secrets in environment variables may expose them to other processes that run on the same system.
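As a concrete hook for this pattern, Terraform automatically maps environment variables prefixed with `TF_VAR_` to input variables, so a `values.env` file can feed a stack instance without any wrapper script (the variable name below is hypothetical):

```hcl
# Terraform reads the environment variable TF_VAR_environment_name
# (e.g. set via `export TF_VAR_environment_name=staging` in values.env)
# as the value of this input variable:
variable "environment_name" {
  type = string
}
```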
Stack Configuration File
Declare parameter values for each stack instance in a corresponding configuration file (the "stack configuration file") in the stack project. The stack configuration file is committed to the source repository along with the rest of the stack project. Also see Configuration as Code below.
This pattern comes with a few advantages:
- it enforces the separation of configuration from the stack code
- it requires and helps enforce consistent logic in how different instances are created, because configuration files can't include logic
- it makes obvious which values are used for a given environment by scanning just a small amount of content (configuration files can even be diff-ed)
- it provides a history of configuration changes
This approach is also not appropriate when the configuration is security-sensitive, as keeping secrets in source code must be avoided. The pattern must be combined with a separate method of managing security-sensitive configuration.
One variation of this pattern involves declaring default values in the stack code and then "overlaying" the values from the configuration files, where the values coming from the configuration files take precedence. This establishes a configuration hierarchy and inheritance mechanism. More on configuration layout and hierarchies in Infrastructure as Code: Dynamic Systems for the Cloud Age by Kief Morris, Chapter 7. Configuring Stack Instances → Patterns for Configuring Stacks → Pattern: Stack Configuration Files → Implementation.
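In Terraform terms, this pattern maps to one `.tfvars` file per stack instance (the file names and values are hypothetical):

```hcl
# environments/staging.tfvars - configuration for the staging instance
environment_name = "staging"
instance_count   = 2

# environments/production.tfvars would declare the same parameters with
# production's values. The file is selected when applying the stack:
#   terraform apply -var-file=environments/staging.tfvars
```

Diffing the two files shows exactly how the instances differ.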
Wrapper Stack
The code that describes the infrastructure, which usually changes more slowly over time, is coded in a stack, or a reusable stack. The configuration, which changes more often over time and across stack instances, is coded in a "wrapper stack". There is one wrapper stack per stack instance. Each wrapper stack defines configuration values for its own stack instance, in the language of the infrastructure tool. All wrapper stacks import the shared reusable stack. There is a separate infrastructure project for each stack instance, which contains the corresponding wrapper stack.
This pattern is only possible if the infrastructure tool supports modules or libraries. Also, it adds an extra layer of complexity between the stack instance and the code that defines it: on one level there is the stack project that contains the wrapper stack, and on the other level there is the component that contains the code for the stack. Since the wrapper stack is written in the same language as the reusable stack, it is theoretically possible to add logic in the wrapper stack - and people will probably be tempted to - but that should be forbidden, because custom stack instance code makes the codebase inconsistent and hard to maintain.
This pattern cannot be used to manage security-sensitive configuration, because the wrapper stack is stored in the code repository.
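Sketched in Terraform terms (the paths and variable names are hypothetical), each wrapper stack is a tiny project that hardwires its instance's values and delegates everything else to the shared module:

```hcl
# staging/main.tf - the wrapper stack for the staging instance.
# It contains only configuration values, no logic.
module "environment" {
  source           = "../modules/environment"
  environment_name = "staging"
  instance_count   = 2
}

# production/main.tf would contain an identical module call with
# production's values.
```

Keeping the wrapper down to a single module call is what prevents the instance-specific logic this section warns against.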
Pipeline Stack Parameters
Define parameter values for each stack instance in the configuration of the delivery pipeline for the stack instance.
This is something of an antipattern. By defining stack instance variables in the pipeline configuration, the configuration is coupled with the delivery process. If the coupling is too tight, and no other way to parameterize stack instance creation exists, it may become hard to develop and test stack code outside the pipeline. It is better to keep the logic and configuration in layers called by the pipeline, rather than in the pipeline configuration.
Stack Parameter Registry
A stack parameter registry, which is a specific use case for a configuration registry, stores parameter values in a central location, rather than in the stack code. The infrastructure tool retrieves the relevant values when it applies the stack code to a stack instance.
To avoid coupling the infrastructure tool directly with the configuration registry, there could be a script or other piece of logic that fetches the configuration values from the registry and passes them to the infrastructure tool as normal (command line) parameters. This increases testability of the stack code.
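For reference, the directly coupled variant (the one the intermediate script avoids) can be sketched in Terraform with a data source against a key/value store such as AWS SSM Parameter Store (the parameter path is hypothetical):

```hcl
# The consumer stack looks up its setting from the central registry.
data "aws_ssm_parameter" "instance_count" {
  name = "/environments/staging/instance_count"
}

locals {
  instance_count = tonumber(data.aws_ssm_parameter.instance_count.value)
}
```

This is terser, but the stack can no longer be tested without a reachable registry, which is the coupling the script-based approach trades away.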
Stack Modularization
See stack components and stacks as components below.
Environment
An environment is a collection of operationally related applications and infrastructure, organized around a particular purpose, such as supporting a specific client (segregation), supporting a specific testing phase, providing service in a geographical region, or providing high availability or scalability. Most often, multiple environments exist, each running an instance of the same system or group of applications. The classical use case for multiple environments is to support a progressive software release process ("path to production"), where a given build of an application is deployed in turn to the development, test, staging and production environments.
The environments should be defined as code too, which increases consistency across environments. An environment's infrastructure should be defined in a stack or a set of stacks - see Reusable Stack to Implement Environments below. Defining multiple environments in a single stack is an antipattern and it should not be used, for the same reasons presented in the Monolithic Stack section.
Copying and pasting the environment stack definition into multiple projects, each corresponding to an individual environment, is also an antipattern, because the infrastructure code for what are supposed to be similar environments will soon get out of sync and the environments will start to suffer from configuration drift. In the rare cases where you want to maintain and change different instances independently and aren't worried about code duplication or losing consistency, copy-and-paste might be appropriate.
Environments as Reusable Stack Instances
A single project is used to define the generic structure of an environment, as a reusable stack, which is then used to manage a separate stack instance for each environment.
When a new environment is needed - for example, when a new customer signs on - the infrastructure team uses the environment's reusable stack to create a new instance of the environment. When a new fix or improvement is applied to the code base, the modified stack is applied to a test environment first, then it is rolled out to all existing customers. This is a pattern appropriate for change delivery to multiple production environments. However, if each environment is heavily customized, this pattern is not applicable.
If the environment is large, describing the entire environment in a single stack might start to resemble a monolithic stack, which, as discussed above, is an antipattern. In this situation, the environment definition should be broken down into a set of application group, service or micro stacks. TO BE CONTINUED.
Managing Differences between Environments
Naturally, there are differences between similar environments, if only in naming. However, differences usually go beyond names and extend to resource configuration, security configuration, etc. In all these cases, the differences should surface as parameters to be provided to the infrastructure tool when creating an environment. This approach is described in the Wrapper Stack section.
Modular Infrastructure
Most of the concepts and techniques used to design modular software systems apply to infrastructure code. Explain modules vs. libraries.
Modules
Modules are a mechanism to reuse declarative code. Modules are components that can be reused, composed, independently tested and shared. CloudFormation has nested stacks and Terraform has modules. In the case of a declarative language, complex logic is discouraged, so declarative code modules work best for defining infrastructure components that don't vary very much. A pattern that fits well with declarative languages is the facade component.
Libraries
Libraries are a mechanism to reuse imperative code. Libraries, like modules, are components that can be reused, composed, independently tested and shared. Unlike modules, libraries can include more complex logic that dynamically provisions infrastructure, as a consequence of the capabilities of the imperative language they're written in. Pulumi supports modular code by using the underlying programming language modularization features.
Stack Components
Stack components refer to breaking stacks into smaller high-cohesion pieces, to achieve the benefits of modularization. While modularizing the stack, also be aware of the dangers of modularization. Depending on the infrastructure language, the pieces could be modules or libraries.
Facade Component
A facade component creates a simplified interface to a single resource from the infrastructure tool language or the infrastructure platform (when more than one resource is involved, see the bundle component below). The component exposes fewer parameters to the calling code. The component passes these parameters to the underlying resource and hardcodes values for the other parameters the resource needs. The facade component simplifies and standardizes a common use case for an infrastructure resource, so the code using it should be simpler and easier to read. The facade limits how you can use the underlying component; this can be beneficial, but it also limits flexibility. As with any extra layer of code, it adds overhead to maintaining, debugging and improving the code.
This pattern is appropriate for declarative infrastructure languages, and not so much for imperative infrastructure languages.
Care should be taken so this pattern does not become an "obfuscation component" pattern, which only adds a layer of code without simplifying anything or adding any particular value.
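A facade component sketch in Terraform (the module path, parameter name and hardcoded defaults are hypothetical): only the name is exposed, while an organizational default is fixed behind the interface:

```hcl
# modules/standard-bucket/main.tf - a facade over a storage bucket.
variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

# Hardcoded behind the facade: versioning is always enabled.
resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

If this module exposed every bucket option, it would add a layer without simplifying anything, i.e. an obfuscation component.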
Bundle Component
A bundle component declares a collection of interrelated infrastructure resources with high cohesion with a simplified interface. Those multiple infrastructure resources are usually centered around a core resource. The bundle component is useful to capture knowledge about various elements needed and how to wire them together for a common purpose.
This pattern is appropriate for declarative infrastructure languages and when the resource set involved is fairly static - it does not vary much between use cases. If you find that you need to create different secondary resources inside the component depending on usage, it's probably better to create several components, one for each use case. In the case of imperative languages, this pattern is known as the infrastructure domain entity.
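A bundle component sketch in the same hypothetical Terraform style, centered on a core resource (a server) with its secondary resources wired in (the AMI filter and instance type are illustrative assumptions):

```hcl
# modules/web-server/main.tf - a bundle centered on a server instance.
variable "subnet_id" {
  type = string
}

variable "server_name" {
  type = string
}

data "aws_ami" "base" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# The core resource.
resource "aws_instance" "this" {
  ami           = data.aws_ami.base.id
  instance_type = "t3.small"
  subnet_id     = var.subnet_id

  tags = {
    Name = var.server_name
  }
}

# A secondary resource wired to the core one.
resource "aws_eip" "this" {
  instance = aws_instance.this.id
}
```

The value of the bundle is the captured wiring: callers get a working server with its address without knowing about AMIs or EIPs.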
Spaghetti Component
An attempt to implement an infrastructure domain entity with a declarative language is called a "spaghetti module". According to Kief Morris, "a spaghetti module is a bundle module that wishes it was a domain entity but descends into madness thanks to the limitations of its declarative language". The presence of many conditionals in declarative code, or difficulty testing the component in isolation, is a sign that you have a spaghetti component. Most spaghetti components are the result of pushing declarative code to implement dynamic logic.
Infrastructure Domain Entity
The infrastructure domain entity is a component that implements a high-level stack component by combining multiple lower-level infrastructure resources. The component is similar to bundle component, but it creates the infrastructure resources dynamically, instead of relying on declarative composition. Infrastructure domain entities are implemented in imperative languages.
An example of high-level concept that could be implemented as an infrastructure domain entity is the infrastructure needed to run an application.
The domain entity is often part of an abstraction layer that emerges when attempting to build infrastructure based on higher-level requirements. Usually an infrastructure platform team builds components that other teams use to assemble stacks.
Abstraction Layer
An abstraction layer provides a simplified interface to lower-level resources. From this perspective, a set of composable stack components can act as an abstraction layer for underlying resources. The components become in this case infrastructure domain entities and they assemble low-level resources into components that are useful when focusing on higher-level tasks.
An abstraction layer helps separate concerns, exposing some and hiding others, so that people can focus on a problem at a particular level of detail. An abstraction layer might emerge organically as various components are developed, but it's usually useful to have a high-level design and standards so that the components of the layer work well together and fit into a cohesive view of the system.
Open Application Model is an example of an attempt to define a standard architecture that decouples application, runtime and infrastructure.
Stacks as Components
Infrastructure composed of small stacks is more nimble than infrastructure built as large stacks from components (modules and libraries). A small stack can be changed more quickly, easily and safely than a large one. Building a system from multiple stacks requires keeping each stack cohesive, loosely coupled, and reasonably small.
Dependency Discovery
Integration between two stacks involves one stack managing a resource that another stack uses as a dependency, so dependency discovery mechanisms are essential to integration. Depending on how stacks find and connect to their dependencies, they may be more tightly or more loosely coupled.
Hardcoding
Hardcoding dependency values creates very tight coupling and makes testing with test doubles impossible. It also makes it hard to maintain multiple infrastructure instances, such as separate environments.
Resource Matching
Consumer stacks dynamically select their dependencies by using names or tags, using either patterns or exact values. This reduces coupling with other stacks, or with specific tools. The provider and consumer stack can be implemented using different tools. The matching pattern becomes a contract between the provider and the consumer. If the producer suddenly starts creating resources with a different naming/tagging pattern, the dependency breaks, so the convention should be published and tested, like any other contract.
Using tags should be preferred to naming patterns.
Most stack languages support matching other attributes than the resource name (Terraform data sources, AWS CDK resource importing).
The language should support resource matching at the language level.
external_resource:
  id: appserver_vlan
  match:
    tag: name == "network_tier" && value == "application_servers"
    tag: name == "environment" && value == ${ENVIRONMENT_NAME}
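The same tag-matching rule can be sketched in plain Python. This is only an illustration of the pattern; real stack tools (e.g. Terraform data sources) perform the lookup against the live platform rather than an in-memory list.

```python
def match_by_tags(resources: list[dict], required_tags: dict) -> list[dict]:
    """Return the resources whose tags include all required key/value pairs."""
    return [
        r for r in resources
        if all(r.get("tags", {}).get(k) == v for k, v in required_tags.items())
    ]

# Hypothetical resource inventory, as the platform might report it.
inventory = [
    {"id": "vlan-1", "tags": {"network_tier": "application_servers",
                              "environment": "test"}},
    {"id": "vlan-2", "tags": {"network_tier": "databases",
                              "environment": "test"}},
]

matches = match_by_tags(inventory, {"network_tier": "application_servers",
                                    "environment": "test"})
```

Note that the consumer only depends on the tag convention, not on how the provider stack created the VLAN.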
Follow up.
Stack Data Lookup
Stack data lookup finds provider resources using data structures maintained by the tool that manages the provider stack, usually referred to as "state files" or "backends". The backend state maintained by the tool includes values exported by the provider stack, which are looked up and used by the consumer stack. See Pulumi backend and stack references, and Terraform backend and output variables.
This pattern works when the stacks to be integrated are managed by the same tool.
Also, this approach requires embedding references to the provider stack in the consumer stack's code. Another option is to use dependency injection: the orchestration logic looks up the output value from the provider stack and injects it into the consumer stack.
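The lookup-and-inject flow can be sketched as follows. The JSON state layout and all names here are assumptions for illustration; real tools (Terraform backends, Pulumi stack references) have their own formats and APIs.

```python
import json
import pathlib
import tempfile

# Simulate a provider stack's backend state with exported output values.
state_file = pathlib.Path(tempfile.gettempdir()) / "network-stack-state.json"
state_file.write_text(json.dumps({"outputs": {"appserver_vlan_id": "vlan-1234"}}))

def read_stack_outputs(path: pathlib.Path) -> dict:
    """Look up the output values the provider stack exported to its backend."""
    return json.loads(path.read_text())["outputs"]

def apply_consumer_stack(vlan_id: str) -> None:
    """Consumer stack entry point: the dependency arrives as a parameter."""
    print(f"creating application servers in {vlan_id}")

# Orchestration logic: look up the provider's output, inject it into the consumer.
outputs = read_stack_outputs(state_file)
apply_consumer_stack(vlan_id=outputs["appserver_vlan_id"])
```

Because `apply_consumer_stack` receives the value as a parameter, it never touches the provider's state format directly.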
Integration Registry Lookup
Both stacks refer to an integration registry (or configuration registry) in a known location to store and read values. Using a configuration registry makes the integration points explicit. A consumer stack can only use values explicitly published by a producer stack, so the provider team can freely change how they implement their resources. If used, the registry becomes critical for integration and must be available at all times. It requires a clear naming convention; one option is a hierarchical namespace.
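A toy version of the pattern, with a hierarchical key namespace. Real registries might be AWS SSM Parameter Store, Consul, or etcd; the in-memory dict and the key scheme below are assumptions for illustration.

```python
class ConfigRegistry:
    """A toy integration registry with a hierarchical namespace."""

    def __init__(self) -> None:
        self._values: dict[str, str] = {}

    def publish(self, key: str, value: str) -> None:
        self._values[key] = value

    def lookup(self, key: str) -> str:
        return self._values[key]

registry = ConfigRegistry()

# The provider stack publishes only the values it chooses to expose;
# everything else about its implementation stays hidden.
registry.publish("/staging/network/appserver_vlan_id", "vlan-1234")

# The consumer stack reads published values by the agreed key convention.
vlan_id = registry.lookup("/staging/network/appserver_vlan_id")
```

The hierarchical keys (`/<environment>/<stack>/<value>`) keep the naming convention predictable as the number of stacks and environments grows.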
Dependency Injection
All dependency discovery patterns presented so far (hardcoding, resource matching, stack data lookup and integration registry lookup) couple the consumer stack to the discovery mechanism. Dependency injection avoids this: the stack code declares the dependencies it needs as parameters, and separate orchestration logic discovers the values and passes them in. Also see Dependency injection in Spring
Configuration as Code
Integrate with Stack Configuration, Infrastructure Concepts | Configuration.
Security-Sensitive Configuration
Integrate with Infrastructure Concepts | Security-Sensitive Configuration.
State Continuity
Idempotency
One of the infrastructure code principles is to ensure that any process can be repeated. However, repeating the same process and re-applying infrastructure code only works reliably if the code is idempotent. The code must be written in such a way that the actual state is not modified if it matches the desired state, no matter how many times the code is applied.
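The principle can be sketched with a toy apply loop: compare actual state to desired state and act only on the difference, so a second run is a no-op. The state shapes and names are illustrative, not any particular tool's model.

```python
def apply(desired: dict, actual: dict) -> dict:
    """Idempotent apply: change only resources whose actual state differs
    from the desired state. Returns the actions taken."""
    actions = {}
    for resource, desired_state in desired.items():
        if actual.get(resource) != desired_state:
            actions[resource] = desired_state
            actual[resource] = desired_state
    return actions

actual_state: dict = {}
desired_state = {"vlan": {"cidr": "10.0.0.0/24"}}

first_run = apply(desired_state, actual_state)   # creates the vlan
second_run = apply(desired_state, actual_state)  # nothing left to change
```

A non-idempotent version would, for example, append a new VLAN on every run instead of converging on one.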
Blast Radius
A "blast radius" is the scope of the system that applying a change can affect. The direct blast radius is the code included in the command that applies the change; the indirect blast radius includes other elements of the system that depend on the resources in the direct blast radius.
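One way to reason about the indirect blast radius is as a walk over the dependency graph between stacks. The stack names and graph below are purely illustrative.

```python
# Which stacks depend on which (consumer lists per provider).
dependents = {
    "network_stack": ["app_stack", "db_stack"],
    "app_stack": ["monitoring_stack"],
    "db_stack": [],
    "monitoring_stack": [],
}

def indirect_blast_radius(changed: str) -> set[str]:
    """Collect every stack reachable through the dependents graph."""
    affected: set[str] = set()
    queue = [changed]
    while queue:
        for dep in dependents[queue.pop()]:
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

affected = indirect_blast_radius("network_stack")
```

Here a change to `network_stack` has every other stack in its indirect blast radius, which is an argument for keeping widely-depended-on stacks small and stable.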
Container Clusters as Code
IaC Chapter 14. Building Clusters as Code
Organizatorium