AWS re:Invent announcements that actually matter for a three-person fintech team
Aurora DSQL, S3 Tables, Amazon Nova. re:Invent 2024 dropped a firehose of announcements, but most solve problems we will not have for years. Three that actually changed our roadmap.
AWS re:Invent 2024 ran from December 2 through 6 and produced over 80 new service announcements. My Twitter feed exploded with takes: every announcement got its own blog post from AWS, its own hot take from analysts, and its own thread from developers speculating about use cases. I spent a weekend filtering the noise and came out with exactly three announcements that matter for a three-person fintech team running a Next.js application on Kubernetes.
This is where the homelab stopped being a hobby and started acting like a leadership tool. It also builds on what I learned earlier in “Setting up k3s on a mini PC for staging and why every startup needs a junk drawer environment.” The infrastructure and ctrlpane work gave me a cheap place to pressure-test release habits, GitOps discipline, and failure modes before I asked the team to trust those defaults at work.
What I Ignored and Why
Before the three that matter, let me explain what I deliberately ignored and why. This is not because these are bad services. It is because they solve problems we do not have at our scale.
- Aurora DSQL: A distributed SQL database with multi-region active-active writes. We have one region and one database. When we need multi-region, we will evaluate it. Until then, it is complexity with no benefit.
- Amazon Q Developer Transform: AI-powered code transformation for Java application upgrades. We do not write Java.
- SageMaker HyperPod: Managed infrastructure for training large language models. We use AI; we do not train it.
- Amazon Bedrock Marketplace: A marketplace for foundation models. We use Claude through the API directly. Adding a marketplace layer adds indirection we do not need.
The filter I apply to every announcement is simple. Does this solve a problem we have today, or a problem we will concretely have within six months? If the answer is neither, I note it and move on. Chasing future problems with current engineering bandwidth is how small teams fall behind on the work that actually matters.
The Three That Changed Our Roadmap
S3 Tables with Apache Iceberg support. We store financial reconciliation data in PostgreSQL, which is fine for operational workloads but painful for analytical queries over millions of rows spanning months. S3 Tables gives us managed Apache Iceberg tables on S3 with query access through Athena. This means we can offload historical analytical workloads to S3 without building a separate data warehouse. For a three-person team that cannot operate a Snowflake or Redshift instance, this is the difference between building analytics and not building analytics.
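To make the workflow concrete, here is a minimal sketch of what "offload analytics to Athena" looks like for us. The table bucket, namespace, table, and column names are all hypothetical placeholders, not our real schema; the only real pieces are the Athena `start_query_execution` call and the fact that S3 Tables surfaces Iceberg tables to Athena through a registered catalog.

```python
def build_monthly_totals_query(namespace: str, table: str, month: str) -> str:
    """Build an Athena SQL query against an Iceberg table managed by S3 Tables.

    Once the S3 Tables catalog is registered with Athena, the table queries
    like any other Iceberg table. Column names here are illustrative.
    """
    return (
        f'SELECT counterparty, SUM(amount_cents) AS total_cents '
        f'FROM "{namespace}"."{table}" '
        f"WHERE settlement_month = '{month}' "
        f"GROUP BY counterparty"
    )


def run_query(sql: str, output_location: str) -> str:
    """Kick off the query and return the execution id to poll for results."""
    import boto3  # deferred so the sketch imports without boto3 installed

    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_location},
    )
    return resp["QueryExecutionId"]
```

The appeal for a small team is that the operational surface is just SQL strings and one API call; there is no warehouse cluster to size, patch, or pay for while idle.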
Amazon Nova foundation models. The Nova family includes Micro, Lite, and Pro tiers at price points that make AI features economically viable for internal tooling. We have been avoiding AI-powered categorization of transactions because the API costs at GPT-4 pricing did not justify the accuracy improvement over rule-based matching. Nova Lite at a fraction of the cost changes that math. We are prototyping an AI-assisted transaction categorizer for Q2.
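A rough sketch of the categorizer we are prototyping, assuming Nova Lite is called through the Bedrock `converse` API. The category list, prompt wording, and model id are assumptions for illustration, and the fallback-to-unknown behavior is a design choice, not anything AWS prescribes.

```python
# Hypothetical category set for the prototype.
CATEGORIES = ["payroll", "vendor", "refund", "fees", "unknown"]


def build_prompt(description: str, amount_cents: int) -> str:
    return (
        f"Categorize this transaction as one of {', '.join(CATEGORIES)}. "
        "Reply with the category only.\n"
        f"Description: {description}\nAmount (cents): {amount_cents}"
    )


def categorize(description: str, amount_cents: int, client=None) -> str:
    """Ask Nova Lite for a category; fall back to 'unknown' on odd replies."""
    if client is None:
        import boto3  # deferred so the sketch imports without boto3 installed
        client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId="amazon.nova-lite-v1:0",  # assumed model id
        messages=[{"role": "user",
                   "content": [{"text": build_prompt(description, amount_cents)}]}],
        inferenceConfig={"maxTokens": 10, "temperature": 0.0},
    )
    text = resp["output"]["message"]["content"][0]["text"].strip().lower()
    return text if text in CATEGORIES else "unknown"
```

Keeping the client injectable lets us unit-test the parsing and fallback logic against canned responses without touching Bedrock, which matters when the whole point is keeping per-call costs down.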
CloudWatch Application Signals. Application-level observability with automatic service maps and SLO tracking, integrated with our existing CloudWatch setup. We currently run our own Grafana stack for observability. Application Signals does not replace it, but the automatic service dependency mapping is something we have never had and would take weeks to build manually. We are evaluating it as a supplement to our Grafana dashboards for service-level health.
The Evaluation Framework
Every re:Invent cycle produces the same pattern. Teams get excited about announcements, add three new services to their architecture, and spend the next quarter integrating things they did not need. I have seen this at every company I have worked at. The antidote is a strict evaluation framework.
- Does this solve a problem we have documented in our backlog right now?
- Can we adopt it without adding a new operational dependency to our infrastructure?
- Does the pricing model work at our current scale, not at the scale we hope to reach?
- Can one engineer set it up and maintain it, or does it require dedicated platform expertise?
- If we adopt it and it does not work out, what is the cost to revert?
If a service passes all five questions, it goes on the evaluation roadmap. If it fails any of them, it goes on a watch list for re-evaluation in six months. S3 Tables, Nova, and Application Signals all passed. Aurora DSQL, Bedrock Marketplace, and the rest went on the watch list.
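The gate above is strict enough that it can be written down literally. This is a toy sketch of the triage rule, with the question wording abbreviated; the point it makes explicit is that the filter is an AND of all five answers, not a weighted score.

```python
# Abbreviated versions of the five questions.
QUESTIONS = [
    "solves a documented backlog problem today",
    "no new operational dependency",
    "pricing works at current scale",
    "one engineer can set up and maintain it",
    "cheap to revert",
]


def triage(service: str, answers: list) -> str:
    """Any single 'no' sends the service to the watch list."""
    assert len(answers) == len(QUESTIONS), "answer all five questions"
    verdict = "evaluation roadmap" if all(answers) else "watch list (6 months)"
    return f"{service}: {verdict}"
```

An all-or-nothing gate is deliberate: a weighted score invites rationalizing a service you are excited about, while a single failing answer ends the conversation.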
The Real re:Invent Takeaway
Operator mode means you inherit every downstream consequence. The code path is only half the story; the other half is how the decision warps planning, trust, and execution speed. I kept relearning that lesson while building ftryos and pipeline-sdk.
The most important skill at re:Invent is not identifying what is new. It is identifying what is new and relevant to your specific situation while ignoring everything else. For a three-person team, every new service you adopt is a service you maintain. Choose accordingly.
I spent one weekend evaluating 80 announcements and came out with three actions. That ratio, 80 announcements to 3 actions, is about right for a small team. The teams that get hurt by re:Invent are the ones that come back with 15 actions and spend Q1 chasing new services instead of shipping product. Your re:Invent takeaway list should fit on an index card. If it does not, you are optimizing for novelty, not for your business.
The value of re:Invent for a small team is not the keynote announcements. It is the hallway conversations with engineers who solved the same scaling problems two years ago. Every useful decision we made in the six months after the conference traced back to a ten-minute conversation at a booth or a breakout session Q&A. Conferences scale best when you arrive with specific questions and leave with specific answers, not when you chase every new service announcement.