Logicata AI Bot

March 19, 2025

The Logicata AI Bot automatically transcribes our weekly LogiCast AWS News Podcasts and summarises them into informative blog posts using AWS Elemental MediaConvert, Amazon Transcribe, and Amazon Bedrock, coordinated by AWS Step Functions.

In this week’s LogiCast AWS News podcast, host Karl Robinson and co-host Jon Goodall of Logicata were joined by special guest Joe Stech, an AWS Community Builder and Solutions Architect at Arm. The trio discussed recent AWS news and developments, sharing their insights and experiences.

Amazon EC2 Allowed AMIs Integration with AWS Config

The first topic of discussion was the recent announcement about Amazon EC2 Allowed AMIs now integrating with AWS Config. Jon explained that this feature is particularly useful for regulated and secure environments where organizations need to use hardened images or approved AMIs.

Previously, custom scripts were required to monitor the impact of enabling Allowed AMIs. Now, with the AWS Config integration, it’s easier to track and manage these AMIs. Jon noted, “This is one of the long list of things of ‘why didn’t they just do that for me?’ And now they’ve done it for you.”

Joe added that this integration seems like a natural progression, stating, “This seems like one of those things that if you don’t use AWS Config all the time, you would be surprised that the feature didn’t already exist.”
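The kind of custom monitoring script this integration replaces can be sketched roughly as follows: list running instances and flag any whose AMI did not come from an approved provider. The allowlist entries and data shapes here are illustrative, not taken from the episode.

```python
# Hypothetical allowlist: the "amazon" owner alias plus one
# illustrative account ID standing in for an internal golden-image account.
APPROVED_OWNERS = {"amazon", "123456789012"}

def non_compliant_instances(instances, image_owners):
    """Return IDs of instances whose AMI owner is not on the allowlist.

    instances    -- list of {"InstanceId": ..., "ImageId": ...} dicts
    image_owners -- mapping of ImageId -> owner alias or account ID
    """
    return [
        inst["InstanceId"]
        for inst in instances
        if image_owners.get(inst["ImageId"]) not in APPROVED_OWNERS
    ]

# In a real script, the inputs would come from boto3, e.g.:
#   ec2 = boto3.client("ec2")
#   reservations = ec2.describe_instances()["Reservations"]
#   images = ec2.describe_images(ImageIds=[...])["Images"]
```

With the AWS Config integration, this bookkeeping becomes a managed compliance check rather than a script someone has to maintain.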

Demystifying Amazon DynamoDB On-Demand Capacity Mode

The conversation then shifted to an article from the AWS Database Blog about demystifying Amazon DynamoDB on-demand capacity mode. The article aimed to dispel 11 myths about DynamoDB, categorized into cost, performance and scaling, operational, and implementation misconceptions.

Jon expressed skepticism about some of the “myths” presented in the article, stating, “I didn’t think most of these were myths… For a few of them, genuinely, I thought, who thinks that?”

The group discussed various misconceptions, including:

  1. Cost myths:
    • Per-request price for on-demand is more expensive than provisioned
    • It’s always cheaper to use provisioned mode with auto-scaling
    • On-demand charges for unused capacity
  2. Performance and scaling myths:
    • On-demand tables have slower response times
    • Tables can’t go higher than 40,000 read and write capacity units
    • You can’t scale beyond twice your previous peak
  3. Operational myths:
    • You can’t control on-demand consumption
    • On-demand tables need to be pre-warmed
    • Changing table mode requires downtime
  4. Implementation misconceptions:
    • On-demand tables will fix throttling issues
    • On-demand is only for spiky workloads

Joe emphasized the importance of understanding one’s own workload, stating, “Understand your own workload and don’t think that some serverless service is going to take care of everything for you. You still have to understand how things work.”
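To make one of those myths concrete: switching a table between capacity modes is a single UpdateTable call with no outage, though AWS does limit how often the mode can be flipped. A minimal sketch of the call's arguments, with an illustrative table name:

```python
def billing_mode_update(table_name, on_demand=True):
    """Build UpdateTable arguments for switching DynamoDB capacity mode."""
    if on_demand:
        # On-demand needs no capacity figures; you pay per request.
        return {"TableName": table_name, "BillingMode": "PAY_PER_REQUEST"}
    # Switching back to provisioned requires explicit throughput values.
    return {
        "TableName": table_name,
        "BillingMode": "PROVISIONED",
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
        },
    }

# dynamodb = boto3.client("dynamodb")
# dynamodb.update_table(**billing_mode_update("orders"))
```

The table stays fully available while the change applies, which is what makes the "requires downtime" claim a myth.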

Long-Term Backup Options for Amazon RDS and Amazon Aurora

The discussion then moved to an article about long-term backup options for Amazon RDS and Amazon Aurora. The hosts and guest shared their experiences and thoughts on long-term backups.

Joe mentioned that for his use cases, a combination of pg_dump and pg_restore for long-term backups, along with regular RDS snapshots for day-to-day backups, has been sufficient. He added, “I could see a lot of scenarios where these long-term backup options will be very useful.”
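Joe's approach can be sketched as a pair of command builders; the host, database, and user names below are placeholders, not anything from the episode. Custom format (`-Fc`) is used because it is the format `pg_restore` reads.

```python
def pg_dump_cmd(host, db, user, outfile):
    """Build a pg_dump command in custom format (-Fc) for pg_restore."""
    return ["pg_dump", "-Fc", "-h", host, "-U", user, "-f", outfile, db]

def pg_restore_cmd(host, db, user, dumpfile):
    """Build a pg_restore command targeting an existing database."""
    return ["pg_restore", "-h", host, "-U", user, "-d", db, dumpfile]

# Example wiring (placeholder RDS endpoint):
# subprocess.run(
#     pg_dump_cmd("mydb.example.rds.amazonaws.com", "app", "admin", "app.dump"),
#     check=True,
# )
```

A dump produced this way is independent of RDS snapshots, which is what makes it suitable for retention beyond the snapshot lifecycle.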

Jon discussed various backup options mentioned in the article, including:

  1. Manual snapshots
  2. AWS Database Migration Service (DMS)
  3. Exporting snapshots to S3
  4. Using native database tools (pg_dump, mysqldump, etc.)

Jon also mentioned his preference for AWS Backup due to its set-and-forget approach, although he noted it wasn’t mentioned in the article.

Karl emphasized the importance of not just setting and forgetting backups, but also monitoring and testing them regularly.

The group agreed that long-term backups are most often used for compliance and audit purposes rather than for recovering from issues.

DeepSeek R1 Model Now Available as Fully Managed Serverless Model in Amazon Bedrock

The podcast then covered the announcement of the DeepSeek R1 model becoming available as a fully managed serverless model in Amazon Bedrock. Jon explained the key difference from the previous release: “You don’t have to run it yourself. That’s the difference.”

He elaborated that the model is now serverless and priced per token, similar to other managed models in Bedrock. This change eliminates the need for users to manage the underlying infrastructure and security.
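A minimal sketch of what "fully managed" means in practice: one Converse API call against the model ID, with no infrastructure to provision. The model ID shown is an assumption; check the Bedrock console for the exact identifier in your region.

```python
def converse_request(prompt, model_id="us.deepseek.r1-v1:0"):
    """Build the arguments for a bedrock-runtime Converse call.

    model_id is an assumed identifier -- verify it in your region.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512},
    }

# bedrock = boto3.client("bedrock-runtime")
# reply = bedrock.converse(**converse_request("Summarise this document: ..."))
# print(reply["output"]["message"]["content"][0]["text"])
```

Billing is per input and output token, the same model as other serverless Bedrock offerings.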

Joe, while not having used DeepSeek personally, suggested potential use cases such as data extraction from open-ended documents. He also mentioned his preference for more cutting-edge models like Claude 3.7 for coding snippets and general queries.

The conversation briefly touched on potential concerns around DeepSeek's Chinese origin. However, the group agreed that Amazon Bedrock's guardrails likely mitigate any significant risks.

Misconfigured AWS S3 Bucket Exposes US Nurses’ Data

The final topic discussed was a recent data leak involving a misconfigured AWS S3 bucket that exposed sensitive data of US nurses. The leak included 86,000 records containing profile photos, social security cards, driver’s licenses, professional certificates, prescription records, and disability insurance claims.

Joe suggested that such leaks might become more common with the rise of AI-assisted coding, especially among less technical users. He stated, “My guess is these types of things are gonna be seen more again, even though they have been seen less in recent years.”

Jon pointed out that AWS has made significant efforts to make it harder to accidentally make S3 buckets public, requiring multiple steps and warnings before allowing public access. However, he agreed that the increase in AI-generated code by less experienced developers could lead to more security oversights.
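The guardrails Jon refers to can also be applied explicitly: blocking public access is one API call per bucket (or per account). A minimal sketch, with an illustrative bucket name:

```python
def block_public_access_config():
    """The four settings that together keep an S3 bucket private."""
    return {
        "BlockPublicAcls": True,       # reject new public ACLs
        "IgnorePublicAcls": True,      # ignore any existing public ACLs
        "BlockPublicPolicy": True,     # reject public bucket policies
        "RestrictPublicBuckets": True, # limit access to AWS principals
    }

# s3 = boto3.client("s3")
# s3.put_public_access_block(
#     Bucket="example-records-bucket",
#     PublicAccessBlockConfiguration=block_public_access_config(),
# )
```

These settings have been enabled by default on new buckets since 2023, which is why a leak like this now takes deliberate steps to produce.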

The hosts concluded by discussing the irony of AWS’s strict policies on public buckets, except when it comes to listing products on the AWS Marketplace, where a public bucket is required for storing product logos.

In closing, Karl Robinson thanked the participants and encouraged listeners to subscribe to the podcast on various platforms and the LogiCast YouTube channel for video content.

This is an AI-generated piece of content, based on the Logicast Podcast Season 4 Episode 11.