Amazon API Gateway Concepts
External
Internal
REST and Hypermedia Concepts
Amazon API Gateway
Amazon API Gateway is the preferred way to expose internal AWS endpoints to external clients, in the form of a consistent and scalable REST programming interface (REST API). Amazon API Gateway can expose the following integration endpoints: internal HTTP(S) endpoints representing custom services, AWS Lambda functions, and other AWS services, such as Amazon Kinesis or Amazon S3. The backend endpoints are exposed by creating an API Gateway REST API, which is the instantiation of a RestApi object, and by integrating API methods with their corresponding backend endpoints, using specific integration types.
Primitives
API
An API is a collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. This collection can be deployed in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods, identified by HTTP verbs supported by API Gateway. Each API is represented by a RestApi instance.
API Name
The API name is the human-readable name the API is listed under in the AWS console. If the API was built from OpenAPI metadata, the name is given by the value of the info.title element. More than one API may have the same name.
API ID
The REST API ID is the unique identifier API Gateway generates for the API; it appears in the invoke URL and in various API elements' ARNs.
API Type
Regional API
A regional API is deployed in the current region.
Edge-Optimized API
An edge-optimized API is deployed to the Amazon CloudFront network.
Private API
A private API is only accessible from within a VPC, through an interface VPC endpoint. This is different from private integration, where the integration endpoints are deployed in a private VPC but are exposed publicly by the API.
API Deployment
An API deployment is a point-in-time snapshot of an API, encapsulating the resources, methods and other API elements as they exist in the API at the moment of the deployment. An API deployment cannot be used directly by clients; it must be associated with, and exposed through, a stage. Conceptually, a deployment is like an executable of an API: once a RestApi is created, it must be deployed and associated with a stage to become callable by its users. API deployments cannot be viewed directly in the API Gateway Console; they are accessible through a stage. Can they be listed via the AWS CLI, though?
Every time an API is modified in such a way that one of its methods, integrations, routes, authorizers, or anything else other than stage settings changes, the API must be redeployed to an existing or a new stage. Resource updates require redeploying the API, whereas configuration updates do not.
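A minimal CloudFormation sketch of the deployment/stage relationship; all resource and stage names are illustrative:
Resources:
  MyApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: my-api
  # The Deployment is a snapshot of MyApi taken when this resource is created.
  # In a real template the Deployment must depend on at least one Method of the API.
  MyDeployment:
    Type: AWS::ApiGateway::Deployment
    Properties:
      RestApiId: !Ref MyApi
  # The snapshot only becomes callable once a Stage points at it.
  V1Stage:
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref MyApi
      DeploymentId: !Ref MyDeployment
      StageName: v1
Existing deployments can also be listed with the AWS CLI: aws apigateway get-deployments --rest-api-id <restapi-id>.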
Stage
A stage is a named reference to a deployment. It is named "stage" because it was intended to be a logical reference to a lifecycle state of the REST API: 'dev', 'prod', 'beta', 'v2', as it evolves. As such, stages allow robust version control of the API. API stages are identified by API ID and stage name. The stage makes the API available for client applications to call. The API snapshot includes methods, integrations, models, mapping templates, Lambda authorizers, etc. An API must be deployed in a stage to be accessible to clients.
Various aspects of the deployment can be configured at the stage level:
- API caching
- Method throttling
- Integration with Web Application Firewall (WAF)
- Client certificates API Gateway uses to authenticate itself to the integration endpoints it calls
- Logging
- X-Ray integration
- SDK generation
- Canary deployments
The stage also allows access to deployment history and documentation history.
Internally, an API stage is represented by a Stage resource.
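A hedged CloudFormation sketch of stage-level configuration; the MyApi/MyDeployment references and all values are illustrative:
ProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    RestApiId: !Ref MyApi
    DeploymentId: !Ref MyDeployment
    StageName: prod
    TracingEnabled: true          # X-Ray integration
    CacheClusterEnabled: true     # API caching
    CacheClusterSize: "0.5"
    MethodSettings:
      - HttpMethod: "*"           # applies to all methods...
        ResourcePath: "/*"        # ...of all resources
        ThrottlingRateLimit: 100  # method throttling
        ThrottlingBurstLimit: 50
        MetricsEnabled: true      # CloudWatch metrics
    Variables:
      backendHost: backend.example.com   # a stage variable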
Stage Name
The stage name is part of the API URL.
Current Deployment
The latest API Deployment for a stage is the current deployment for that stage. Creating a new deployment makes it the current deployment.
Stage Variable
Stage variables can be set in the Stage Editor and used in the API configuration to parameterize the integration of a request, for example the host of an HTTP integration URI. Stage variables are also available in mapping templates, through the $stageVariables object.
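For illustration, a stage variable can parameterize the host of an HTTP integration in the OpenAPI definition; the variable name backendHost and the resource path are assumptions:
# Placed under a method of the OpenAPI document; the URI is resolved per stage.
x-amazon-apigateway-integration:
  type: http_proxy
  httpMethod: GET
  uri: "http://${stageVariables.backendHost}/inhabitants"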
Stage Tags
Stage Operations
Amazon API Gateway URL
The URL is determined by a protocol (HTTPS or WSS), a hostname, a stage name and, for REST APIs, the resource path. This URL is also known as the "Invoke URL".
https://<restapi-id>.execute-api.<region>.amazonaws.com/<stage-name>/<resource-path>
The URL for a specific deployment, exposed in a certain stage can be obtained by navigating to the stage in the console: API Gateway Console -> APIs -> <API Name> -> Stages -> <stage name>. The "Invoke URL" is displayed at the top of the "Stage Editor".
Amazon API Gateway Base URL
The hostname and the stage name determine the API's base URL:
https://<restapi-id>.execute-api.<region>.amazonaws.com/<stage-name>/
Amazon API Gateway Hostname
<restapi-id>.execute-api.<region>.amazonaws.com
Base Path
See Custom Domain Names below.
Integration
The integration represents an interface between the API Gateway and a backend endpoint. A client uses an API to access backend features by sending method requests. The API Gateway translates the client request, if necessary, into a format acceptable to the back end, creating an integration request, and forwards the integration request to the backend endpoint. The backend returns an integration response to the API Gateway, which is in turn translated by the gateway into a method response and sent to the client, mapping, if necessary, the backend response data to a form acceptable to the client. The API developer can control the behavior of the API frontend interactions by providing (coding) method requests and responses and making them part of the API. The API developer can control the behavior of the API's backend interactions by setting up the integration requests and responses. These may involve data mapping between a method and its corresponding integration.
Integration and Swagger Operations
If the API was generated from Swagger metadata, an API Gateway "integration" can be logically thought of as a "wrapper" around a Swagger operation, which enhances the operation semantically by specifying how API Gateway handles that operation.
Also see:
Integration Endpoints and Types
Amazon API Gateway can integrate three types of backend endpoints, and can also simulate a mock integration endpoint:
- HTTP(S) endpoints, representing custom REST API services or web sites, can be integrated with HTTP proxy integration or HTTP custom integration. Specified as "http" or "http_proxy" in the configuration metadata. The difference is that in the case of "http_proxy" the client request is passed to the backend as-is, as described in the Proxy Integration section below, while in the case of "http" the client request can be modified, as described in the Custom Integration section.
- Lambda functions can be integrated with Lambda proxy integration or Lambda custom integration. The Lambda custom integration is a special case of AWS integration, where the integration endpoint corresponds to the function-invoking action of the Lambda service. Lambda proxy integration is specified as "aws_proxy" in the configuration metadata; Lambda custom integration, like other AWS service integrations, is specified as "aws".
- AWS service endpoints (Amazon Kinesis or Amazon S3) can only be integrated with non-proxy (custom) integration. Specified as "aws" in the configuration metadata.
- Mock backend endpoints, where API Gateway itself serves as the integration endpoint. More details in the Mock Integration section below. Specified as "mock" in the configuration metadata.
The integration type is defined by how the API Gateway passes data to and from the integration endpoint:
Proxy Integration
In general, proxy integration implies a simple integration setup with a single HTTP endpoint or Lambda function, where the client request is passed to the backend as input with minimal or no processing, and the backend processing result is passed directly to the client. The request data that is passed through includes request headers, query string parameters, URL path variables and the payload. This integration relies on direct interaction between the client and the integrated backend, with no intervention from API Gateway. Because of that, the backend can evolve without requiring updates or reconfiguration of the integration point in API Gateway. There is no need to set up the integration request or the integration response. More details needed here. This is the preferred integration type for calling Lambda functions through API Gateway.
Proxy integration should be preferred if there are no specific needs to transform the client request for the backend or transform the backend response data for the client.
Proxy Resource
A proxy resource is an API resource defined with the greedy path parameter {proxy+}; it catches requests to all of the resource's sub-paths and forwards them to a single integration.
{proxy+}
Proxy resources handle requests to all sub-resources using a greedy path parameter: {proxy+}. Creating a proxy resource also creates a special HTTP method called ANY.
ANY verb
The ANY method supports all valid HTTP verbs and forwards requests to a single HTTP endpoint or Lambda integration.
API of a Single Method
Uses the {proxy+} proxy resource and the ANY verb. The method exposes the entire set of publicly accessible HTTP resources and operations of a website. When the backend HTTP endpoint exposes more resources, the client can use them with the same API setup.
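A sketch of such a single-method API in Swagger 2.0 terms, assuming a hypothetical backend at example.com:
paths:
  /{proxy+}:                        # greedy path parameter, catches all sub-paths
    x-amazon-apigateway-any-method: # the ANY verb
      parameters:
        - name: proxy
          in: path
          required: true
          type: string
      x-amazon-apigateway-integration:
        type: http_proxy
        httpMethod: ANY
        uri: "http://example.com/{proxy}"
        requestParameters:
          integration.request.path.proxy: method.request.path.proxy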
http_proxy Integration
An 'http_proxy' integration simply sends the client request, including path, headers and body, to the backend, and sends the integration response, including headers and body, back to the client. It is not possible to modify the response, for example to add or overwrite response headers; use the 'http' custom integration for that.
HTTP Proxy Integration Example
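A minimal sketch, in the same Swagger 2.0 style as the CORS example below; the resource path and backend URL are made up:
paths:
  /inhabitants:
    get:
      responses:
        200:
          description: "200 response"
      # The request and response are passed through unchanged.
      x-amazon-apigateway-integration:
        type: "http_proxy"
        httpMethod: "GET"
        uri: "http://backend.example.com/inhabitants"
        passthroughBehavior: "when_no_match"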
Custom Integration
Custom integration implies a more elaborate setup procedure. For a custom integration, both the integration request and the integration response must be configured, and the necessary data mappings, from the method request to the integration request and from the integration response to the method response, must be put in place. Among other things, custom integration allows the reuse of configured mapping templates for multiple integration endpoints that have similar requirements for their input and output data formats. Since the setup is more involved, custom integration is recommended for more advanced application scenarios.
http Integration
Also see 'http_proxy' Integration.
Mock Integration
This integration type lets API Gateway return a response without sending the request further to the backend. A mock integration is a "loop-back" endpoint that does not invoke into any backend. Mock integration is useful for testing and enables collaborative development of an API, where a team can isolate their development effort by setting up simulations of the API components owned by other teams. This integration is specified as "mock" in the configuration metadata.
Configuring Integration Type
For details on how to configure a specific integration type, see below:
Private Integration
Private integration is an arrangement where the endpoints are deployed into a VPC and are not publicly accessible; it is API Gateway that publicly exposes the endpoint's resources, or a subset of them. For HTTP or HTTP proxy integration, private integration is configured by specifying the "connectionType" property as VPC_LINK. A private integration can have open access or controlled access, the latter implemented using IAM permissions, a Lambda authorizer or an Amazon Cognito user pool.
VPC Link
A VPC link enables access to a private resource available in a VPC without requiring it to be publicly accessible a priori. It requires a VpcLink API Gateway resource to be created. The VpcLink encapsulates the connection between API Gateway and the targeted VPC resource. The VpcLink may target one or more network load balancers of the VPC, which in turn should be configured to forward the requests to the final endpoint running in the VPC. The network load balancer must be created in advance, as sketched here:
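A hedged CloudFormation sketch; the subnet IDs and names are placeholders:
BackendNLB:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Type: network            # VPC links require a network load balancer
    Scheme: internal
    Subnets:
      - subnet-aaaa1111
      - subnet-bbbb2222
BackendVpcLink:
  Type: AWS::ApiGateway::VpcLink
  Properties:
    Name: backend-vpc-link
    TargetArns:
      - !Ref BackendNLB      # Ref on an ELBv2 load balancer returns its ARN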
The API method is then integrated with a private integration that uses the VpcLink. The private integration has an integration type of HTTP or HTTP_PROXY and has a connection type of VPC_LINK. The integration uses the connectionId property to identify the VpcLink used.
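In the OpenAPI definition, the private integration looks roughly like the following; the backend URI and the vpcLinkId stage variable are assumptions:
x-amazon-apigateway-integration:
  type: http_proxy
  httpMethod: GET
  uri: "http://backend.example.com/inhabitants"   # reached through the NLB
  connectionType: VPC_LINK
  connectionId: "${stageVariables.vpcLinkId}"     # the id of the VpcLink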
Target Load Balancer
A VPC link must specify a list of network load balancer ARNs belonging to the VPC targeted by the VPC link. The network load balancers must be owned by the same AWS account as the API owner. At least one target network load balancer ARN must be specified.
VPC Link Operations
Integration Request
The API Gateway translates, if necessary, a client method request into a format acceptable to the back end and creates an integration request. The integration request is forwarded to the backend endpoint. The integration request is part of the REST API's interface with the backend.
Integration Response
The backend returns an integration response to the API Gateway, which in turn translates it into a method response and sends it to the client, mapping, if necessary, the backend response data to a form acceptable to the client. The integration response is part of the REST API's interface with the backend. Also see IntegrationResponse below.
API Gateway Resources
Amazon API Gateway Resources that Require Redeployment
Where are these resources living?
RestApi
A RestApi represents an Amazon API Gateway API.
Resource
A Resource contains methods.
Method
Method Throttling
Method throttling limits the rate of requests sent to the API. The method throttling rate is configured at the stage level. Typical default values are 10,000 requests per second with a burst of 5,000 requests.
RequestValidator
MethodResponse
Integration
The integration type can be specified programmatically by setting the type property of the Integration resource as follows:
- AWS: exposes an AWS service action, including Lambda, as an integration endpoint, via custom integration.
- AWS_PROXY: exposes a Lambda function (but no other AWS service) as an integration endpoint, via proxy integration.
- HTTP: exposes an HTTP(S) endpoint as an integration endpoint, via custom integration.
- HTTP_PROXY: exposes an HTTP(S) endpoint as an integration endpoint, via proxy integration.
- MOCK: sets up a mock integration endpoint.
Also see above:
IntegrationResponse
Also see Integration Response above.
GatewayResponse
DocumentationPart
DocumentationVersion
Model
ApiKey
See API Key below.
Authorizer
VpcLink
Amazon API Gateway Resources that Require Configuration Changes without Redeployment
Account
Deployment (API Deployment)
See API Deployment above.
DomainName
BasePathMapping
Stage
See Stage above.
Usage
UsagePlan
Logging
Role Required for Logging
To enable API Gateway CloudWatch logs, create (or locate) an IAM role that allows API Gateway to write logs to CloudWatch, and register its ARN in the API Gateway account settings. Search for "Permissions for CloudWatch Logging" in https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html.
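A hedged CloudFormation sketch of such a role, attaching the AWS-managed AmazonAPIGatewayPushToCloudWatchLogs policy and registering the role ARN on the Account resource:
ApiGatewayLoggingRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: apigateway.amazonaws.com   # API Gateway assumes this role
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs
# The role is registered once per account and region.
ApiGatewayAccount:
  Type: AWS::ApiGateway::Account
  Properties:
    CloudWatchRoleArn: !GetAtt ApiGatewayLoggingRole.Arn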
CloudWatch Execution Logging
CloudWatch API execution logs are a rendering of the API execution operations, which are the calls the API clients make against the API gateway. The name "execution" probably comes from the "execute-api" component, which is the component the logged calls are invoked against. The execution logs are managed by API Gateway, which performs CloudWatch log management, such as creating log groups and log streams, and the actual log generation, on behalf of the user. The execution logs can be enabled via the AWS Console by navigating to the stage that needs logging, selecting the "Logs/Tracing" tab, then CloudWatch Settings, and checking "Enable CloudWatch Logs" and, optionally, "Log full requests/responses data". The log level (ERROR or INFO) can also be specified at this point. Execution logging can be enabled declaratively in a CloudFormation template, as shown here:
Once enabled, a new log group named API-Gateway-Execution-Logs_<rest-api-id>/<stage-name> is created and the execution logs can be retrieved from it. The log group contains multiple log streams. Why are there a large number of empty streams? Their number seems to grow over time. Each non-empty log stream seems to correspond to an API invocation. The non-empty log streams can be identified by sorting by "Last Event Time": the non-empty streams have a non-empty "Last Event Time". The logged data includes errors or execution traces (such as request or response parameter values or payloads), data used by Lambda authorizers, whether API keys are required, whether usage plans are enabled, and so on.
Note that once a log group or stream is manually deleted, the API requests won’t be logged anymore, unless the API is redeployed. Unfortunately, deleting the API or the CloudFormation stack that created the API does not seem to delete the log groups, so they need to be tracked and cleaned up manually.
Example of execution logging for "http_proxy" integration:
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) Extended Request Id: XU4aYEUmPHcFniQ=
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) Verifying Usage Plan for request: f63e1b44-5274-11e9-b00c-23641f8d81b6. API Key: API Stage: vm3sichhq6/v15
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) API Key authorized because method 'GET /inhabitants' does not require API Key. Request will not contribute to throttle or quota limits
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) Usage Plan check succeeded for API Key and API Stage vm3sichhq6/v15
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) Starting execution for request: f63e1b44-5274-11e9-b00c-23641f8d81b6
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) HTTP Method: GET, Resource Path: /inhabitants
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) Successfully completed execution
22:49:51 (f63e1b44-5274-11e9-b00c-23641f8d81b6) Method completed with status: 200
CloudWatch Access Logging
Access logs record who has accessed the API and how the caller accessed it. Access logs are managed by the API developer or owner, who must create the log groups (or specify existing ones). Unlike execution logging, which can be configured independently for each operation, access logging is a stage-level configuration and applies to all API requests that go through that stage. Various details, such as $context variables expressed in a custom format, can be logged. To preserve the uniqueness of each log entry, the access log format must include $context.requestId. Note that $input variables are not supported. Access logs can be generated in a format supported by the analytics backend, such as CLF (Common Log Format), JSON, XML or CSV. For more details on the various log formats, see Set Up CloudWatch API Logging in API Gateway. Context variables can be used in the access log format; for a list of available context variables, see API Gateway Mapping Template Reference. An example of how to configure access logging with CloudFormation follows:
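A hedged sketch; the log group name and the JSON-style format are illustrative, and the format includes the mandatory $context.requestId:
AccessLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: /apigateway/my-api-access-logs
ProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    RestApiId: !Ref MyApi
    DeploymentId: !Ref MyDeployment
    StageName: prod
    AccessLogSetting:
      # The log group ARN, without a trailing ":*"
      DestinationArn: !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${AccessLogGroup}"
      Format: >-
        { "requestId": "$context.requestId", "ip": "$context.identity.sourceIp",
          "method": "$context.httpMethod", "path": "$context.path",
          "status": "$context.status" }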
CloudWatch Metrics
To Expand.
API Gateway can be configured to generate metrics and send them to CloudWatch. The metrics include:
- Count, the total number of API calls in a given period
- IntegrationLatency can be used to measure the responsiveness of the backend.
- Latency measures the overall responsiveness of the API
- CacheHitCount, CacheMissCount
- 4XXError and 5XXError counts (client-side and server-side errors).
CloudTrail Logging
API management operations can be logged with CloudTrail. The API management operations are the REST API calls that an API developer makes against the API Gateway "apigateway" component. CloudTrail logging can be set at the stage level and can be enabled in the AWS Console. To Expand.
Logging Operations
API Gateway Link Relations
API Documentation
See DocumentationPart and DocumentationVersion above.
X-Ray Integration
API request latency issues can be troubleshot by enabling AWS X-Ray. AWS X-Ray can be used to trace API requests and downstream services. X-Ray tracing can be configured at the stage level, in the Stage Editor.
CORS
The API Gateway may be configured to handle the CORS OPTIONS pre-flight invocation at the API level, with a mock integration. If CORS is enabled for a resource, API Gateway responds to the pre-flight request instead of the backend, giving a small performance improvement. The mock integration for OPTIONS may be configured as shown below. The API Console also supports this configuration, generating a basic CORS configuration that allows all origins, all methods and several common headers.
paths:
  /a:
    options:
      consumes:
        - "application/json"
      responses:
        200:
          description: "200 response"
          headers:
            Access-Control-Allow-Origin:
              type: "string"
            Access-Control-Allow-Methods:
              type: "string"
            Access-Control-Allow-Headers:
              type: "string"
      x-amazon-apigateway-integration:
        responses:
          default:
            statusCode: "200"
            responseParameters:
              method.response.header.Access-Control-Allow-Methods: "'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT'"
              method.response.header.Access-Control-Allow-Headers: "'Content-Type,Authorization,X-Amz-Date,X-Api-Key,X-Amz-Security-Token'"
              method.response.header.Access-Control-Allow-Origin: "'*'"
        passthroughBehavior: "when_no_match"
        requestTemplates:
          application/json: "{\"statusCode\": 200}"
        type: "mock"
In http_proxy integration mode, the CORS headers must be generated by the backend; there is no way to just "add" them at the API Gateway level. If CORS headers must be generated at the API Gateway level, the 'http' custom integration must be used.
Security
Creation, Configuration and Deployment
To create, configure and deploy an API in API Gateway, the IAM user doing so must be provisioned with an IAM policy that includes access permissions for manipulating API Gateway resources and link relations. The "AmazonAPIGatewayAdministrator" AWS-managed policy grants full access to create, configure and deploy an API in API Gateway:
arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator
Attaching the preceding policy to an IAM user allows ("Effect":"Allow") the user to perform any API Gateway action ("Action":["apigateway:*"]) on any API Gateway resource (arn:aws:apigateway:*::/*) associated with the user's AWS account. To refine the permissions, see Amazon API Gateway Developer Guide Page 241 "Control Access to an API with IAM Permissions".
Access
The following AWS-managed policy grants full access to invoke an API:
arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess
To refine the permissions, see Amazon API Gateway Developer Guide Page 241 "Control Access to an API with IAM Permissions".
When an API Gateway API is set up with IAM roles and policies to control client access, the client must sign API requests with Signature Version 4. Alternatively, AWS CLI or one of the AWS SDKs can be used to transparently handle request signing. For more details, see Amazon API Gateway Developer Guide Page 462 "Invoking a REST API in Amazon API Gateway".
AWS Endpoint Authentication
When API Gateway is integrated with AWS Lambda or another AWS service, such as Amazon S3 or Amazon Kinesis, the API Gateway must be enabled as a trusted entity to invoke an AWS service in the backend. This is achieved by creating an IAM role and attaching a service-specific access policy to the role. Without specifying this trust relationship, API Gateway is denied the right to call the backend on behalf of the user, even when the user has been granted permissions to access the backend directly. More details in Amazon API Gateway Developer Guide page 525.
Signature Version 4
Per-Method Authorization
API Key
See ApiKey above.
Integration with Web Application Firewall (WAF)
An API deployment can be integrated with Web Application Firewall (WAF) by configuring its stage. Web ACLs can be created from the Stage Editor.
Client Certificates
API Gateway uses client certificates to authenticate itself to the integration endpoints it calls. Client certificates are managed at the stage level.
API Caching
API caching can be enabled for a specific deployment by configuring its stage.
SDK Generation
SDK generation can be initiated at stage level, in the Stage Editor.
API Export
API Export can be initiated at stage level, in the Stage Editor. The API can be exported as:
- Swagger (OpenAPI 2.0) (including with API Gateway Extensions and Postman Extensions), as JSON or YAML.
- OpenAPI 3.0 (including with API Gateway Extensions and Postman Extensions), as JSON or YAML.
Canary Deployment
A canary deployment is used to test new API deployments and/or changes to stage variables. A canary receives a configurable percentage of the requests going to the main stage. In addition, new API deployments are made to the canary first, before being promoted to the entire stage.
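Canary settings are a property of the Stage resource; a hedged CloudFormation sketch with illustrative values:
ProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    RestApiId: !Ref MyApi
    DeploymentId: !Ref MyDeployment
    StageName: prod
    CanarySetting:
      PercentTraffic: 10          # 10% of the requests go to the canary
      UseStageCache: false
      StageVariableOverrides:
        backendHost: canary.example.com   # override a stage variable for the canary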
Custom Domain Names
An API's base URL contains a randomly generated API ID (not necessarily user friendly), "execute-api", the region, and the stage name. To make the base URL more user friendly, a custom domain name can be created and mapped onto the base URL. Moreover, to allow for API evolution and to support multiple API versions within the context of a custom domain name, an API stage can be mapped onto a base path under the custom domain name, so the base URL of the REST API becomes:
https://custom-domain-name/base-path/
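A hedged CloudFormation sketch of a custom domain name and base path mapping; the domain, certificate ARN and base path are placeholders:
ApiDomainName:
  Type: AWS::ApiGateway::DomainName
  Properties:
    DomainName: api.example.com
    RegionalCertificateArn: arn:aws:acm:us-east-1:123456789012:certificate/example
    EndpointConfiguration:
      Types:
        - REGIONAL
# Maps https://api.example.com/pets/... onto the 'prod' stage of MyApi.
PetsBasePathMapping:
  Type: AWS::ApiGateway::BasePathMapping
  Properties:
    DomainName: !Ref ApiDomainName
    RestApiId: !Ref MyApi
    Stage: prod
    BasePath: pets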
Mapping Template
For configuration, see:
$context
$input
TODO.
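A hedged sketch of a request mapping template in a Lambda custom ("aws") integration, showing both $context and $input; the Lambda ARN, the request parameter 'name' and the target payload shape are made up:
x-amazon-apigateway-integration:
  type: aws
  httpMethod: POST      # Lambda functions are always invoked with POST
  uri: arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:MyFunction/invocations
  requestTemplates:
    # VTL template: wraps the original payload and adds request metadata.
    application/json: |
      {
        "requestId": "$context.requestId",
        "sourceIp": "$context.identity.sourceIp",
        "stage": "$context.stage",
        "name": "$input.params('name')",
        "body": $input.json('$')
      }
  responses:
    default:
      statusCode: "200"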