
AWS 2020 re:Invent Recap

Our experts re:cap AWS re:Invent 2020.

February 1, 2021 20 minute read

AWS re:Invent 2020 ran in two parts: November 30th through December 18th, and then January 12th through the 14th. That gave us almost four weeks of AWS re:Invent content, and we've now had a bit of time to digest the entirety of the conference. So we thought we would put down our thoughts!

We asked our internal cloud thought leaders, AWS architects and engineers to give us their impressions of the event and point out what excited them most in the announcements. Let's take a look at the new services, the exciting new features and our thoughts on the nearly month-long event.

Roger White, AWS Cloud Practice Lead & Widget Wrangler on AWS Config and AWS Network Firewall:

Simplify industry compliance management with AWS Config 

What is it?

Anyone who’s ever worked in operations knows that once you implement a “temporary” change in a production environment, it usually ends up permanent. As workloads move into the cloud, it is important to protect the environment from configuration sprawl and remain compliant. AWS Config addresses these configuration challenges: it provides the ability to assess, audit and evaluate configurations by continuously monitoring and tracking configuration changes, with history retained for up to seven years.

AWS Config comes with preset rule templates out of the box and an Aggregator dashboard, an overall view of Config data from different accounts and regions that provides greater visibility into an organization's compliance state. Two new features were recently introduced that enhance AWS Config's capabilities. The first is conformance packs: one or more rules and remediation options bundled in a template that follows operational best practices for compliance frameworks; these can be customized, offering greater flexibility. The second is advanced query, for reporting on historical metadata, current configuration, resource inventory and cost optimization. Sample queries are provided, and you can also write your own.

As an example, a customer looking to follow some of the best practices for Cybersecurity Maturity Model Certification (CMMC Level 1) can modify the conformance pack rules to create a subset that pertains to their environment, view the compliance details from the different accounts in the Aggregator dashboard, and then run an advanced query to get more granular operational data.
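Here is a minimal sketch of what such an advanced query might look like through boto3; the SQL expression is illustrative, and the full set of queryable properties is documented by AWS Config.

```python
# Minimal sketch: running an AWS Config advanced query with boto3.
import boto3

config = boto3.client("config")

# Illustrative query: all EC2 instances currently in the "running" state.
expression = (
    "SELECT resourceId, resourceType, configuration.instanceType "
    "WHERE resourceType = 'AWS::EC2::Instance' "
    "AND configuration.state.name = 'running'"
)

response = config.select_resource_config(Expression=expression)
for result in response["Results"]:
    print(result)  # each result is a JSON string describing one resource
```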

With AWS Config, customers can feel confident that they have the flexibility of existing rules, customizable rules and remediation options, plus the added visibility to ensure that operations remain compliant with an organization's desired configuration boundary.

Use cases 

  • Asset Discovery
  • Continuous audit and compliance

Customer benefits 

  • Customer visibility, reporting and querying
  • Reduction in faults in the cloud

Introducing AWS Network Firewall

One of the most exciting announcements at re:Invent is AWS Network Firewall, a highly available and scalable managed firewall and intrusion prevention service for your VPC. It offers 99.99% availability, which beats most customers' on-premises SLAs, and scales up automatically to support the environment. It inspects L3-L7 traffic from VPC to VPC, outbound to the Internet, inbound from the Internet, and across AWS Direct Connect and VPN, with the ability to selectively choose which flows you want to inspect. It can also inspect traffic based on behavior and detect protocol anomalies, while defending against network threats with its intrusion prevention capabilities.


Quickly come into compliance with flexible options that let an organization scale out from one to many accounts, or centralize using the Transit Gateway. Network Firewall is highly customizable, with fine-grained controls based on IP/port/protocol, FQDN and general pattern matching, and it can be managed centrally from AWS Firewall Manager. Lastly, it integrates with existing security tools from 14 partners, with more on the way. All these features make this managed service a good choice for cloud environments whose teams want to focus on the things that matter most: policy, compliance and logs.
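For a flavor of how rule management looks in practice, here is a hedged boto3 sketch that creates a stateful domain-list rule group; the rule group name, capacity and domain are placeholders.

```python
# Illustrative sketch: a stateful domain-list rule group via boto3.
import boto3

nfw = boto3.client("network-firewall")

response = nfw.create_rule_group(
    RuleGroupName="block-example-domains",  # placeholder name
    Type="STATEFUL",
    Capacity=100,  # placeholder sizing
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                # Deny outbound HTTP/TLS flows to these domains.
                "Targets": [".example.com"],
                "TargetTypes": ["HTTP_HOST", "TLS_SNI"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
)
print(response["RuleGroupResponse"]["RuleGroupArn"])
```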

Jeff Weeks, AWS Principal Cloud Consultant on AWS Professional Services and Babelfish:

Professional Services in AWS Marketplace

The re:Invent announcement that SaaS and third-party professional services can now be purchased through the AWS Marketplace, under an existing AWS account, could be a huge benefit to enterprise customers. The AWS Marketplace offers the advantage of streamlined vendor onboarding and standardized licensing, while still providing flexible pricing, delivery terms and payment schedules. For most enterprise customers this could avoid months of delays onboarding new vendors and negotiating service contracts and purchasing terms. For any company that provides AWS professional services, this opens a new, cost-effective marketing channel directly to new customers and easy add-on sales for existing customers.

Amazon Babelfish for Aurora PostgreSQL

The migration of proprietary legacy databases such as MS SQL Server and Oracle is one of the single largest obstacles and risks an enterprise faces when adopting the cloud. The re:Invent preview release of Babelfish, a new translation layer for Amazon Aurora, will enable Aurora to understand applications written natively for Microsoft SQL Server. Together with the AWS Database Migration Service, this will allow an enterprise to reliably and cost-effectively (as low as $3 per terabyte) migrate its data from proprietary legacy databases into Amazon Aurora, while moving its applications to the cloud unchanged, with Babelfish translating their native calls for Amazon Aurora. While still in preview, this has the potential to free an enterprise from legacy providers' contracts, reduce licensing costs and reduce operational costs, all without the risk of refactoring applications during migration.
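To illustrate the "applications unchanged" point: with Babelfish enabled, a SQL Server application keeps speaking T-SQL over the TDS wire protocol to the Aurora cluster. The sketch below is a heavily hedged assumption of what that looks like; the endpoint and credentials are placeholders, and pymssql stands in for whatever TDS client the application already uses.

```python
# Hedged sketch: a SQL Server client talking to a Babelfish-enabled
# Aurora PostgreSQL cluster over TDS. All connection details are placeholders.
import pymssql  # pip install pymssql

conn = pymssql.connect(
    server="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder
    port=1433,            # Babelfish listens on the standard TDS port
    user="babelfish_admin",
    password="...",       # placeholder
    database="mydb",
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")  # T-SQL, handled by the translation layer
print(cursor.fetchone())
```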

David Harrison, AWS Cloud Platform Architect and network guru on AWS Transit Gateway Connect:

SD-WAN to the cloud just became a reality

AWS Transit Gateway Connect allows you to natively connect your software-defined wide area network (SD-WAN) infrastructure with AWS. SD-WANs are leveraged to connect data centers and branch offices over the public Internet. Until now, extending SD-WAN into the AWS cloud was logistically difficult to accomplish, as there was no native construct that made it easy or efficient:

  • SD-WAN appliances in the cloud had limited bandwidth to the rest of the infrastructure in AWS.
  • Very limited routing capability between the SD-WAN appliances and AWS.

Transit Gateway Connect eliminates these bandwidth restrictions and comes with extensive routing functionality. Most customers have chosen to terminate their SD-WAN hubs in carrier-neutral facilities (CNFs) like Equinix, close to the cloud; now they have another real option for connecting their SD-WAN networks to AWS. Transit Gateway Connect supports Generic Routing Encapsulation (GRE) for higher bandwidth performance and uses Border Gateway Protocol (BGP) for dynamic routing.

Transit Gateway Connect can also be used with a third-party branch or customer gateway appliance running in an on-premises network, using AWS Direct Connect as transport. With this new Connect attachment type, GRE packets from third-party appliances are routed based on the Connect attachment's associated route table. A GRE tunnel is established between the Transit Gateway and the third-party appliance, and you can then establish BGP peering with the Transit Gateway inside the GRE tunnel to securely exchange routing information.
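In boto3 terms, setting this up might look roughly like the following; all IDs, addresses and the ASN are placeholders.

```python
# Hedged sketch: creating a Connect attachment and a BGP peer with boto3.
import boto3

ec2 = boto3.client("ec2")

# Layer a Connect attachment over an existing transport attachment
# (VPC or Direct Connect); GRE is the supported protocol.
connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # placeholder
    Options={"Protocol": "gre"},
)
attachment_id = connect["TransitGatewayConnect"]["TransitGatewayAttachmentId"]

# Establish BGP peering with the SD-WAN appliance inside the GRE tunnel.
ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId=attachment_id,
    PeerAddress="10.0.0.10",                 # appliance's GRE outer IP (placeholder)
    InsideCidrBlocks=["169.254.100.0/29"],   # link-local block for BGP inside the tunnel
    BgpOptions={"PeerAsn": 65001},           # placeholder ASN
)
```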

To determine if this is the right solution for you, consider testing your SD-WAN solution using Transit Gateway Connect in the WWT ATC lab!

Sam Shouse, AWS Senior Cloud Platform Architect and Sith acolyte on Amazon HealthLake:

Amazon HealthLake is a tremendous solution for anyone who needs to store, transform, query and analyze data in the healthcare industry. Healthcare data is both structured and unstructured, ranging from insurance claims to medical images. Correlating all of that data while meeting stringent HIPAA requirements is extremely challenging. Amazon HealthLake is a HIPAA-eligible service that makes it possible for organizations in the healthcare industry to store, transform, query and analyze health data at petabyte scale. This will enable customers to move faster by accelerating the complex problem of preparing healthcare data for advanced analytics and the machine learning models that can improve healthcare in so many ways. Amazon HealthLake is a solution that can have a monumental impact on the healthcare industry and its patients!

James Brown, AWS Cloud Platform Engineer and Godfather of Soul on AWS Audit Manager:

Although AWS already has services for alerting on and remediating security configuration within an AWS environment (AWS Security Hub and AWS Config), I believe AWS has gone to the next level with Audit Manager. AWS Audit Manager helps simplify the audit process for an array of security compliance standards (CIS, HIPAA, etc.) and offers a one-stop-shop tool to scan and assess your AWS environment. Gathering artifacts and reviewing compliance documentation can be a long and arduous task during scheduled organization or government audits. This service generates reports that can be offered as artifacts to prove compliance during an organization's or government agency's audit process, greatly shortening this effort.

Sean Doyle, AWS Practice Lead, APN Ambassador and elite fashion model on AWS Lambda Container Image Support and SageMaker Data Wrangler:

AWS re:Invent this year was filled with many exciting announcements. With announcements spanning hybrid services, DevOps, compute and data/AI, this was an exciting year to be working in the AWS cloud. There were so many great announcements, but the two that really caught my eye are AWS Lambda Container Image support and SageMaker Data Wrangler.

AWS Lambda Container Image support allows developers to deploy their code to Lambda as a container image up to 10GB in size. This is a huge benefit for larger workloads that rely on heavy dependencies, such as machine learning or data engineering workloads. AWS has created base images to build these container images from, and also offers a way to use custom containers through the Lambda Runtime API. In addition, AWS has released the Lambda Runtime Interface Emulator so you can test a container locally and ensure it will run when deployed to Lambda. Lambda functions built as container images can be pushed to ECR and then deployed to AWS Lambda. These container images also support existing AWS Lambda Layers with some modifications; alternatively, libraries that were previously shared through Lambda Layers can be packaged within the container image itself, reducing complexity.
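Once an image is built from one of the AWS base images and pushed to ECR, registering it as a function is a small API call. Here is a sketch with a hypothetical function name, image URI and role ARN.

```python
# Sketch, with placeholder names/ARNs: registering a container-image Lambda.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_function(
    FunctionName="ml-inference",  # hypothetical
    PackageType="Image",          # container image instead of a zip archive
    Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-inference:latest"},
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder role
    MemorySize=2048,
    Timeout=60,
)
```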

The other announcement that I am excited about is Amazon SageMaker Data Wrangler. Data Wrangler makes it easy for data scientists to import, aggregate and prepare data for machine learning models. It covers the data pre-processing workflow end to end: preparation, selection, cleansing, exploration, visualization and feature engineering. The offering has over 300 built-in data transformations and can be combined with another new service, Amazon SageMaker Pipelines, to build ML workflows. Data preparation is one of the longest steps in the machine learning workflow, and Data Wrangler aims to shorten this stage of the lifecycle significantly.

In addition to Amazon SageMaker Data Wrangler, there is also a Python library from awslabs called AWS Data Wrangler (awswrangler). It offers similar benefits, helping data science and data engineering teams speed up their workflows by making it easy to work with multiple AWS services. For example, with the Data Wrangler library I can load a dataset into a Pandas dataframe and then, with one method, create a table in AWS Glue, upload the dataset to S3 and transform it to a new format. I can then use the SDK to query the new Glue Data Catalog table through Athena, and more.
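Here is a short sketch of that exact workflow using awswrangler; the bucket, Glue database and table names are placeholders.

```python
# Sketch of the workflow described above (pip install awswrangler).
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# One call: write the dataframe to S3 as Parquet and register a Glue table.
wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/datasets/example/",  # placeholder bucket
    dataset=True,
    database="analytics",  # Glue database (assumed to exist)
    table="example",
)

# Query the new table through Athena, straight back into a dataframe.
result = wr.athena.read_sql_query("SELECT * FROM example", database="analytics")
print(result)
```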

David Ball, Cloud Platform Architect, APN Ambassador and professional juggler on AWS Outposts:

An AWS Outpost allows an organization to extend AWS services like Amazon EC2, ECS, EKS and RDS into its local data center, supporting a cloud-based application deployment and operating model, with AWS tools and APIs, for local applications that require low-latency connections or local data processing. Later in 2021, AWS Outposts will be available in 1U and 2U server form factors, meaning a customer will no longer have to order an entire 42U rack to enjoy the benefits of an AWS Outpost.

What will be compelling and exciting to see is how these small form factor Outposts drive new architectures and enable additional AWS services. Today, the 42U Outpost is an extension of a single AWS Availability Zone. Thus, to support a local multi-AZ deployment model with Outposts, you'd need to deploy two 42U Outposts racks, which 1) wouldn't be cheap and 2) may exceed the physical space your local data center or manufacturing facility can offer. In these cases, 1U and 2U Outposts will become very attractive. In terms of service introduction, the smaller Outposts should make it easier for AWS to enable services requiring multiple AZs. Personally, I'd love to see the small form factor Outposts become the mechanism by which AWS enables services such as Amazon WorkSpaces.

If you're an AWS re:Invent veteran, you've likely seen slides during Andy Jassy's keynotes indicating that we are "still in the early days of cloud," as on-premises spend still accounts for over 90% of the IT budget. By expanding the AWS Outposts offering, AWS has signaled its intent to be a player in that market and, over the course of the next year and beyond, it will be interesting to watch how AWS Outposts enables organizations to drive cloud-like innovation within the local data center.

Joe Esianor, AWS Solutions Architect and aspiring session bassist on AWS Gateway Load Balancer:

In my opinion, the GWLB announcement, along with the additions to Transit Gateway, was the most impactful. We can throw in the EC2 C6gn instances as well, with 100 Gbps networking on AWS Graviton2 processors. The reason is simple to me: these will help build the infrastructure foundation upon which all else is built, and they help expand the migration offering for AWS.

Over the years Amazon has provided three load balancer options: Classic (now deprecated), Application Load Balancer and Network Load Balancer. Gateway Load Balancer (GWLB) is the newest addition to the load balancer offering. This announcement is significant because it resolves the asymmetric ingress routing issue for traffic sourced outside a VPC. Until now, only workaround solutions offset this limitation, and they in turn complicate the firewall design.

GWLB installs an endpoint in each AZ that forwards ingress and egress traffic to the destination or enforcement appliance using Geneve encapsulation. Forwarding is based on rules created for the respective target groups. GWLB endpoints have several advantages, two of which are support for jumbo MTU and a tunneling capability that simplifies the traffic path. GWLB greatly improves the security design and options for Transit Gateway implementations.
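For a sense of the moving parts, here is a hedged boto3 sketch that creates a GWLB and its GENEVE target group; the subnet and VPC IDs are placeholders.

```python
# Illustrative sketch: a Gateway Load Balancer plus GENEVE target group.
import boto3

elbv2 = boto3.client("elbv2")

gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",                    # placeholder name
    Type="gateway",
    Subnets=["subnet-0123456789abcdef0"],      # placeholder subnet
)

# GWLB target groups use GENEVE on port 6081; the targets are the
# firewall/inspection appliances behind the load balancer.
tg = elbv2.create_target_group(
    Name="inspection-appliances",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC
    TargetType="instance",
)
```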

Brandon Hunter, AWS Solutions Architect and Betty White impersonator on VPC Reachability Analyzer:

How much time have you and the members of your cloud administration teams expended troubleshooting network connectivity matters in AWS? Anyone familiar with the process can assure you that spending any less than a few minutes on any single network reachability problem would be a rare and welcome occasion. Most would probably agree that it's not uncommon to spend anywhere from 15-20 minutes up to several hours working on even the simplest network reachability and connectivity problems, particularly across large and complex cloud networking estates. As our networks continue to grow in size, density and complexity, our work only becomes more difficult and the risk of inadvertent changes that much more likely.

As cloud network administrators, we all get the same typical questions:

  • “Why can’t I open this webpage?”
  • “Can these two servers talk to one another?”
  • “Can you confirm this port is opened in the firewall?”

We've all heard them before: the often urgent and desperate pleas of some poor soul once again at a work stoppage due to a presumed network connectivity problem, real or imagined. The usual sequence of events?

  1. Users log an incident response ticket with the Service Desk in hopes of a quick fix (though usually not holding their breath).
  2. The helpdesk investigates a bit, only to realize they can't help and need to escalate to a cloud networking specialist for assistance.
  3. The cloud network gal finally finds some cycles to sync with the user, hoping to gain more meaningful diagnostic input to further isolate the issue.
  4. She tries pings and traceroutes from multiple hosts and other networks, VPCs and accounts.
  5. She pores over configurations for routing, firewalling and load balancing elements, in addition to VPC Flow Logs and other platform logs.
  6. She walks the network end to end, trying to validate whether the traffic is getting through, and if not, why.

If we're lucky, a few hours later the matter has been identified and resolved.

"Phew... huge build-up... of course there is an easier way, right?" Right! Enter the VPC Reachability Analyzer!

As the name suggests, it exists to help answer cloud network reachability questions quickly and easily. It accepts a 4-tuple input consisting of source and destination endpoint information (IP addresses optional), plus the destination port and the transport protocol to check (TCP or UDP). Within a few minutes (usually a few seconds), the service returns an analysis object detailing the network reachability from the source to the destination resource, including L3 routing and L4 port connectivity end to end.

It accomplishes this by applying automated reasoning to a formal specification of resource configuration (VPC route tables, security groups, network ACLs, etc.) in order to compute and "formally" (read: mathematically) prove or disprove feasible reachability from any source to any destination within AWS VPCs, a process known as formal verification. In addition, traffic to, from or through VPN gateways, Internet gateways, Transit Gateways, VPC endpoints and VPC peering connections may also be specified as sources or destinations (or forwarders).
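In practice, an analysis is two API calls: define the path (the 4-tuple), then start the analysis. Here is a sketch with placeholder instance IDs.

```python
# Sketch, with placeholder IDs: running a Reachability Analyzer check via boto3.
import boto3

ec2 = boto3.client("ec2")

# The 4-tuple: source, destination, protocol and destination port.
path = ec2.create_network_insights_path(
    Source="i-0123456789abcdef0",       # source instance (placeholder)
    Destination="i-0fedcba9876543210",  # destination instance (placeholder)
    Protocol="tcp",
    DestinationPort=443,
)

analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"]
)
# Poll describe_network_insights_analyses for the result: NetworkPathFound
# says whether the destination is reachable, and Explanations say why not.
```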

As an aside, the same automated reasoning concepts are leveraged to power the Amazon Inspector service, which provides a very powerful formal verification of one's network and access security posture. More on the VPC Reachability Analyzer, and links to more about AWS' implementations of automated reasoning, can be found in this AWS blog post.

Bill Johns, AWS Cloud Practice Lead and antique duck call collector on Amazon Lookout for Vision:

AWS' computer vision capabilities were first released in 2016 and have grown over the years. The most recent release is Amazon Lookout for Vision, which enables builders to automatically find visual defects in manufactured products, accurately and at scale. Lookout for Vision uses computer vision to identify damaged or missing components or structures. The solution is flexible enough to address entire production lines, all the way down to finding defects in silicon wafers.

This capability is not yet supported in all regions. It is available in us-east-1, us-east-2, us-west-2, eu-west-1, eu-central-1, ap-northeast-1 and ap-northeast-2. When this service is combined with AWS Outposts and other AWS industrial applications, manufacturers can leverage machine learning to improve quality while lowering costs.
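Once a model has been trained on labeled normal and anomalous images, scoring a new image is a single API call. Here is a sketch with a hypothetical project name and image file.

```python
# Illustrative sketch: scoring an image against a trained Lookout for Vision
# model with boto3. Project name, model version and file are placeholders.
import boto3

lfv = boto3.client("lookoutvision")

with open("wafer.jpg", "rb") as image:
    response = lfv.detect_anomalies(
        ProjectName="wafer-inspection",  # hypothetical project
        ModelVersion="1",
        Body=image,
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]
print(result["IsAnomalous"], result["Confidence"])
```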

Chris Williams, Multicloud Consultant, AWS Hero, chronic overachiever and failed stand-up comedian on… basically all of week one:

The big event was the Andy Jassy keynote. Subsequent sessions went deeper into the details of those announcements. Here is a list of the announcements (along with some commentary from my perspective):

  • AWS Trainium
    • Machine learning is getting cheaper/better/faster all the time. If you are not looking into use cases for AI/ML to apply to your business, you are missing the boat.
  • Amazon ECS Anywhere
  • Amazon EKS Anywhere
  • EKS open source
    • A tacit acknowledgement that other computers exist on the planet that DO NOT belong to AWS. This will make multicloud efforts easier, even though Jassy still did not say the word “multicloud.”
  • Lambda per-millisecond billing
    • Making the cheapest way to run workloads in the cloud even cheaper.
  • Lambda container support
    • Deploy & package Lambda functions as container images.
    • Up to 10GB in size.
  • AWS Proton
    • Managed application deployment service for container & serverless apps.
    • DevOps tool for monitoring deployments & providing design templates to the dev teams.
  • gp3 Volumes for EBS
    • 20% lower price point per GB.
    • Scale IOPS and throughput without adding capacity.
    • No more wasted disk space to get the IO you need from your EBS volumes (see the migration sketch after this list).
  • io2 Block Express
    • “SAN for the cloud”
    • 256K IOPS & 4000 MBps throughput.
    • Max volume size 64 TB.
    • Throughput scales at 0.256 MB/s per provisioned IOPS, up to a max of 4000 MBps per volume.
    • No more striping volumes for high IO workloads!
  • Amazon Aurora Serverless V2
    • Scale in fraction of seconds.
    • 90% cost savings compared to provisioning for peak capacity (yeah… but isn’t that what we are NOT supposed to do?).
  • AWS Glue Elastic Views
    • Combine and collate data from different data stores.
    • Copies data from data store(s) to a target store.
    • Use standard SQL to create your virtual table.
    • Will play a key role in data analytics.
  • SageMaker Data Wrangler
    • Simplifies the process of data prep from weeks to minutes.
    • Including data selection, cleansing, exploration & visualization.
    • Single visual interface.
  • SageMaker Feature Store
    • Managed repo to store ML features.
    • Tracks the metadata of the stored features for query purposes.
  • SageMaker Pipeline
    • Purpose-built CI/CD service for ML.
    • Making that SkyNet future a certainty, one day at a time!
  • Amazon DevOps Guru
    • ML powered service to help improve an app's operational performance & availability.
    • Automatically detect operational issues and recommend actions.
    • This looks very cool; I am excited to learn more about this.
  • Amazon QuickSight Q
    • Uses NLP to answer your business questions “instantly.”
  • Amazon Monitron
    • End-to-end service including sensors and an ML service.
    • Uses ML to detect abnormal machine behavior early.
    • Enables predictive maintenance.
    • Reduces unplanned downtime.
  • Amazon Lookout for Equipment
    • For customers with existing equipment sensors.
    • Uses data from your sensors to detect abnormal equipment behavior.
    • Sends sensor data to AWS to build ML model.
  • AWS Panorama Appliance
    • Add computer vision to your existing onsite IP cameras.
    • Enhances your existing onsite cameras, which is pretty cool. Appliance means the site doesn’t need a constant/reliable internet connection, which is even cooler!
  • AWS Outposts
    • 2 new sizes – 1U and 2U.
    • I am currently applying to get a 1U Outpost installed in my house.
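As promised in the gp3 item above, here is a sketch of migrating an existing volume to gp3 in place with boto3; the volume ID and performance figures are placeholders.

```python
# Sketch with a placeholder volume ID: moving a volume to gp3 and raising
# IOPS/throughput independently of capacity.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    VolumeType="gp3",
    Iops=6000,        # provisioned independently of volume size
    Throughput=500,   # MB/s, also independent of size
)
```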

 

As our teams dive into these new and exciting services over the coming year, be on the lookout for new:
