@stilkov
Software Architecture, Processes, Organization — and Humans
In this talk, we’ll look at software architecture, team organization and the interplay between technology and humans. We’ll address some of the patterns and anti-patterns that can be observed when organizations try to evolve towards more decentralization and autonomy, and take a look at some strategies to ensure technology supports and enables the desired outcome. We’ll also mention data mesh as it’s 2023 and we have to.
@stmcallister
Developer Advocate at ngrok
Scott McAllister is a Developer Advocate for ngrok. He has been building software in several industries for over a decade. Now he's helping others learn about a wide range of web technologies and incident management principles. When he's not coding, writing or speaking he enjoys long walks with his wife, skipping rocks with his kids, and is happy whenever Real Salt Lake, Seattle Sounders FC, Manchester City, St. Louis Cardinals, Seattle Mariners, Chicago Bulls, Seattle Storm, Seattle Seahawks, OL Reign FC, St. Louis Blues, Seattle Kraken, Barcelona, Fiorentina, Borussia Dortmund or Mainz 05 can manage a win.
As hardcore Malaga CF supporters, the JOTB organisation condemns cheering in favour of Borussia Dortmund.
Building Ingress - From Concept to Connection
At first glance, ingress is an easy concept: you route traffic from the wider world into your cluster. As you layer on SSL and load balancing, the principles stay the same and everything works with minimal thought and effort. But as your infrastructure grows, your clusters grow, the interactions get more complex, and your security requirements explode. In this session, I’ll walk you through how we designed and built an Ingress Controller and converted our clusters to use it in production to support millions of requests. It wasn’t easy, but running it as an open-source effort from the start encouraged our team and customers to review, explore, and consider situations outside our original plans.
@Xiaoman_Dong
Software Engineer at StarTree
Xiaoman Dong has devoted the past 10+ years to working in the streaming analytics and database domain, building data infrastructure, scalable distributed systems, and low-latency queries over large datasets. During his work at StarTree and Uber, he has designed, led, built, and operated several large-scale business-critical solutions based on open-source software like Apache Kafka, Apache Pinot, Apache Flink, and Kubernetes. While working at Stripe, he also built and ran the world’s largest single Pinot cluster, with around 1 trillion rows and 1 PB in size.
Xiaoman is also an advocate of Big Data and distributed OLAP systems. In recent years he has been actively speaking at large tech conferences such as Kafka Summit, Flink Forward, and ApacheCon.
Kubernetes Clusters At Scale: Managing Hundreds of Apache Pinot Kubernetes Clusters Inside Each End User’s Own Cloud Infrastructure
How do you efficiently build and manage hundreds of Kubernetes clusters that serve modern online analytics databases for different customers? To add to the challenge, what if customers need to run those clusters inside their own private clouds? We are sharing the system design that solves this.
How do you provide fully managed online analytics databases like Pinot to hundreds of customers while those Pinot clusters run in each customer’s own virtual private cloud? The answer is to combine the power of Kubernetes with our automated, scalable architecture that can fully manage a fleet of Kubernetes clusters.
When companies consider using SaaS (Software as a Service) products, they are often held back by challenges like security considerations and storage compliance regulations. Those concerns often require that the data stays within a virtual private cloud owned by the company, which makes managed solutions very hard for companies to adopt.
At StarTree we have built a modern data infrastructure based on Kubernetes, so companies can keep their data inside their own infrastructure and at the same time get the benefits of a fully managed Apache Pinot cluster deployed in the customer’s cloud environment.
We have designed a scalable system based on Kubernetes that enables remote creation, maintenance, and monitoring of hundreds of Kubernetes clusters owned by different companies. This allowed us to scale quickly from a handful of deployments to more than 100 Pinot clusters in a short time span, with a team of just over ten engineers.
@AlexJonesax
Kubernetes Engineering Director at Canonical
Alex works as both a contributor and end-user of cloud-native technology.
When not working as Kubernetes Engineering Director at Canonical, he contributes to CNCF TAG App Delivery as Tech Lead and to the Open Feature project on the governing board.
Passionate about mentoring, collaboration and cloud-native architecture, he thrives on working together to solve problems and communicating those learnings to others. Speaking where possible and mentoring others to tell their story is a large part of the enjoyment of his professional life.
Rust Operators For Kubernetes: A Glimpse At The Future Foundation Of Cloud Native
In this talk, we will explore the benefits of using the Rust programming language for building custom Kubernetes operators. Rust's focus on safety, performance, and reliability makes it an ideal language for developing robust, scalable, and efficient software solutions. We will delve into the unique features of Rust that make it an attractive choice for building Kubernetes operators, including its memory safety guarantees, zero-cost abstractions, and built-in concurrency support. We will also discuss some of the challenges of using Rust in the Kubernetes ecosystem, such as interfacing with other programming languages and managing dependencies. Finally, we will demonstrate how to build a simple Kubernetes operator using Rust and explore best practices for deploying and managing it in a production environment. Attendees will come away from this talk with a deeper understanding of the advantages and trade-offs of using Rust for Kubernetes development and practical tips for getting started with building their own Rust-based operators.
@marlene_zw
Developer Advocate at Voltron Data
Marlene is a Zimbabwean software engineer, developer advocate and explorer. She is a previous director and vice-chair for the Python Software Foundation and is currently serving as the vice-chair of the Association for Computing Machinery practitioner board. In 2017, she co-founded ZimboPy, a non-profit organization that gives Zimbabwean young women access to resources in the field of technology. She is also the previous chair of PyCon Africa and is an advocate for women in tech on the continent. Professionally, Marlene is currently working as a Developer Advocate at Voltron Data.
Elephants, ibises and a more Pythonic way to work with databases
In this talk, I will introduce Ibis, a software package that provides a more Pythonic way of interacting with multiple database engines. In my own adventures living in Zimbabwe, I’ve often encountered ibises (the bird version) perched on top of elephants. If you’ve never seen an elephant in real life, I can confirm that they are huge, complex creatures. The image of a small bird sitting on top of a large elephant serves as a metaphor for how Ibis (the package) provides a less complex, more performant way for Pythonistas to interact with multiple big data engines.
I'll use the metaphor of elephants and ibises to show how this package can make a data workflow more Pythonic. The Zen of Python reminds us that simple is better than complex. The bigger and more complex your data, the stronger the argument for using Ibis. Raw SQL can be quite difficult to maintain when your queries are very complex. For Python programmers, Ibis offers a way to write SQL in Python that allows for unit testing, composability, and abstraction over specific query engines (e.g. BigQuery)! You can carry out joins, filters, and other operations on your data in a familiar, Pandas-like syntax. Overall, using Ibis simplifies your workflows, makes you more productive, and keeps your code readable.
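To make the idea concrete, here is a minimal sketch of what an Ibis workflow can look like, assuming the DuckDB backend and a hypothetical orders.csv file with customer_id, amount and status columns:

```python
import ibis

# Connect to a backend (DuckDB here purely as an illustration; BigQuery,
# Postgres and other engines expose the same expression API).
con = ibis.duckdb.connect()

# Hypothetical table of orders.
orders = con.read_csv("orders.csv")

# Compose the query as Python expressions: filters, group-bys and
# aggregations read like Pandas, but run on the database engine.
paid = orders.filter(orders.status == "paid")
expr = (
    paid.group_by("customer_id")
        .aggregate(total_spent=paid.amount.sum())
        .order_by(ibis.desc("total_spent"))
)

print(ibis.to_sql(expr))  # inspect the SQL that Ibis generates
df = expr.execute()       # run it and get a Pandas DataFrame back
```

Because the expression is only compiled to SQL at execution time, the same code can be pointed at a different backend with a one-line change to the connection.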
@supercoco9
Developer Advocate at QuestDB
As a Developer Advocate at QuestDB, Javier helps developers make the most of their (fast) data. He makes sure the core team behind QuestDB listens to absolutely every piece of feedback he gets, and he facilitates collaboration in their open-source repository.
Javier loves data storage, big and small. He has extensive experience with SQL, NoSQL, graph, in-memory databases, Big Data, and Machine Learning. He likes distributed, scalable, always-on systems.
Ingesting over a million rows per second on a single database instance
How would you build a database to support sustained ingestion of several hundreds of thousands of rows per second while running near real-time queries on top?
In this session, I will go over some of the technical decisions and trade-offs we applied when building QuestDB, an open-source time-series database developed mainly in Java, and how we achieve over a million row writes per second on a single instance without blocking or slowing down the reads. There will be code and demos, of course.
We will also review some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
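As a small taste of the write path, here is an illustrative sketch (not QuestDB's official client) of streaming rows into QuestDB using the InfluxDB Line Protocol over its default ILP TCP port 9009; the table and column names are made up:

```python
import socket
import time

HOST, PORT = "localhost", 9009  # QuestDB's default ILP-over-TCP port

def make_line(sensor_id: str, temperature: float, ts_ns: int) -> bytes:
    # InfluxDB Line Protocol: table,tag=value field=value timestamp\n
    return f"readings,sensor={sensor_id} temperature={temperature} {ts_ns}\n".encode()

with socket.create_connection((HOST, PORT)) as sock:
    for i in range(1_000):
        line = make_line(f"s{i % 10}", 20.0 + i * 0.01, time.time_ns())
        sock.sendall(line)
```

In practice the official QuestDB client libraries handle batching, escaping and error reporting for you; the point of the sketch is only how simple the row format on the wire is.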
@svpino
Director of Computer Vision at Levatas
Santiago is the Director of Computer Vision at Levatas. He has a Master's in Machine Learning from the Georgia Institute of Technology and two decades of experience building software for some of the largest companies in the world. He co-founded bnomial.com, where he publishes daily Machine Learning questions and competitions.
Operationalizing Computer Vision using the Spot robot
Operationalizing computer vision models remains a considerable challenge for companies. In this talk, you'll learn how we leverage Boston Dynamics' Spot robot and a suite of computer vision models to perform industrial inspections, increasing accuracy and safety. I'll show you some of the challenges we face and our process for deploying these technologies.
@V_Formicola
Engineering Manager at Flo
Engineering Manager at Flo with 10+ years of industry experience (Microsoft/ThoughtWorks) and 3+ years in leadership roles. Experienced with large-scale backend systems, legacy modernisation and infrastructure as code across multiple domains and tech stacks. Primarily focused on the transformation of software systems and the enablement of teams with varied backgrounds. Experienced in working with distributed global teams (multi-timezone). Creator of knowledge-sharing/D&I communities, public speaker and social change advocate.
Tech Leading 101: the Good, the Bad and the Ugly
Tech Leading is a hard job. You need to be an architect, carer, leader, servant, software engineer, business analyst, DevOps engineer, diplomat and a lot more, all at the same time… We'll explore what it means to be the tech lead of a team, along with its joys and responsibilities. More importantly, we will navigate the complex balance of responsibilities and the wide variety of skills needed. I'll share real-life experiences and lots of advice on becoming the best tech lead you can be. Whether you would like to become one, you are currently doing the job, or you are leading tech leads, this talk might be interesting for you.
Senior Software Engineer at Red Hat
Valerio is a Senior Software Engineer at Red Hat with 10 years of experience developing software in enterprise environments as well as startups. He holds a Master's Degree in Computer Engineering and has lived in 4 countries. Open source and sports enthusiast.
Building multi-cloud applications with Skupper
Building a multi-cloud network requires complex VPN configurations and policies. How can we enable secure communication across Kubernetes clusters without these problems? Building Skupper, we have been facing these challenges head on, and in this talk I want to present the problems we solve and the architectural decisions we made. Finally, a demo will show an actual use case.
@sbykov
SDE at Temporal Technologies
Sergey Bykov is responsible for the architecture of Temporal Cloud, a hosted service that is helping businesses, from large enterprises to tiny startups, to build invincible applications. Prior to joining Temporal, Sergey was one of the founders of the Orleans project at Microsoft Research and led its development for over a decade. The mediocre state of developer tools for cloud services and distributed systems at the time inspired him to join the Orleans project in order to qualitatively improve developer productivity in that area. The same passion brought him to Temporal.
Inception or deja vu all over again
In this talk, Sergey will share his team’s experience of bringing Temporal Cloud to life. He will explain the architecture of the service, with the primary focus on building its Control Plane. Sergey will cover approaches and patterns used in the process and lessons learned from successes and mistakes along the way. This should be of interest to engineers building, or thinking of building, multi-tenanted hosted services, which have become the most sustainable way of monetizing open-source server products.
@maslankamichal
Software Developer at Redpanda
Michal has 10+ years of experience in software engineering across different industries, focusing on distributed systems. He joined Redpanda in 2019 where he is one of the primary contributors to Redpanda core. Michal is currently responsible for Redpanda's Raft implementation and cluster orchestration bits.
What can we learn from Control Theory? A deep insight into Redpanda's backlog controller
Redpanda is a Kafka®-compatible streaming data platform that not only handles user writes and reads, but also needs to maintain a tremendous number of background processes—including log compaction, object store uploads, data and leadership balancing. The challenge is that these background tasks compete with the main workload for finite resources, like CPU, I/O bandwidth and memory.
Some of these background tasks share a common property: a growing backlog.
Keeping the backlog size small with minimal influence on write/read path latency requires tuning the number of shares given to the background task.
This talk walks you through our challenges with scheduling background tasks in Redpanda, and takes you behind the scenes to learn how we leveraged the PID controller (an idea borrowed from industrial applications) to manage the number of shares given to background tasks with a growing backlog.
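To give a flavour of the technique (this is a generic illustration, not Redpanda's actual implementation), a PID controller that turns backlog size into scheduler shares can be sketched in a few lines of Python:

```python
class PIDController:
    """Generic PID controller: maps the error between a setpoint and a
    measurement into a control output (here, scheduler shares)."""

    def __init__(self, kp: float, ki: float, kd: float,
                 out_min: float, out_max: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = measured - setpoint  # positive when the backlog is too large
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to the range of shares the scheduler will accept.
        return max(self.out_min, min(self.out_max, output))

# Hypothetical usage: each tick, measure the backlog and adjust shares.
pid = PIDController(kp=0.5, ki=0.05, kd=0.1, out_min=10, out_max=1000)
backlog_bytes = 50_000_000  # pretend measurement from a compaction task
shares = pid.update(setpoint=0.0, measured=backlog_bytes, dt=1.0)
print(f"granting {shares:.0f} shares to the background task")
```

In a real system you would also need anti-windup on the integral term and careful tuning of the gains so the controller neither oscillates nor starves the foreground read/write path.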
@iamrashminagpal
Independent Software Engineer
Rashmi is a Software Engineer with a passion for building products in AI/ML. In her almost four-year career in tech, she’s brought products to life at pre-seed startups, scaled teams and software at hypergrowth unicorns, and shipped redesigns and features used by millions at established giants. When she's not coding, capturing the cosmos with her telescope, or playing board games with friends, you can find Rashmi playing with her Maltese dog, Fluffy!
Unearth the Black Box: Building Fair, Accountable and Trustworthy ML Systems
Have you ever wondered why 87% of machine learning models never make it to production? Who must be held responsible if a machine learning algorithm discriminates or shows bias? Are the decisions taken by these models trustworthy? In this talk, let’s unravel the answers to such complex questions!
Machine learning has had a significant impact in many areas, including medicine, entertainment, security, and education, but its use can also result in increased cognitive dependence on technology and ethical concerns such as bias. Therefore, it is crucial to address these issues by reducing the impact of human biases and creating trustworthy, reliable, and understandable machine learning systems.
The key takeaways of my talk include the importance of understanding and interpreting the decision-making processes of machine learning models, as well as the need to ensure that these models are fair, accountable, and trustworthy in their predictions and actions. The talk will also highlight the challenges of building interpretable models, the importance of evaluating and testing models for bias, and the need for transparency and accountability in the development and deployment of machine learning systems.
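As one small, concrete example of the kind of bias evaluation the talk advocates, a demographic parity check can be written in plain Python; the predictions and group labels below are made up:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction rates
    across groups; 0.0 means every group is treated the same."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs for applicants from two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, gap)  # {'A': 0.6, 'B': 0.4} and a gap of 0.2
```

A check like this is only one narrow fairness metric, but running it routinely against model outputs is the sort of accountability practice the talk argues should be part of every ML deployment pipeline.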
@xmal
Professor at Universidade do Porto
Carlos Baquero is a Professor in the Department of Informatics Engineering within FEUP, and area coordinator at the High Assurance Laboratory (HASLab) within INESC TEC. From 1994 till mid-2021 he was affiliated with the Informatics Department, Universidade do Minho, where he concluded his PhD (2000) and Habilitation/Agregação (2018). He currently teaches courses in Operating Systems and in Large Scale Distributed Systems. Research interests cover data management in eventual consistent settings, distributed data aggregation and causality tracking. He worked in the development of data summary mechanisms such as Scalable Bloom Filters, causality tracking for dynamic settings with Interval Tree Clocks and Dotted Version Vectors and predictable eventual consistency with Conflict-Free Replicated Data Types. Most of this research has been applied in industry, namely in the Riak distributed database, Redis CRDBs, Akka distributed data and Microsoft Azure Cosmos DB.
The Legacy of Peer-to-Peer Systems
This talk introduces some of the history and evolution of Peer-to-Peer (P2P) systems over the last 20 years. Although the novelty of the concept has faded, it still provided many contributions to the design of distributed systems that remain relevant nowadays. The talk addresses the connections to filesystem research from the 80s, the evolution of DHTs, and their influence on NoSQL databases and Blockchains.
Head of Artificial Intelligence at Freepik Company SL
Iván de Prado Alonso is an expert in artificial intelligence and big data. He currently serves as the Head of Artificial Intelligence at Freepik Company, a technology company that produces and distributes graphic assets. He has applied AI to various fields, including agriculture, web scraping, and image retrieval. His curiosity has led him to get involved in startups, found a company, delve deep into big data and distributed systems, and even study economics. However, he has now found a passion for artificial intelligence.
AI Image Generation: From Text to Reality
In this talk, I will explore the fascinating world of AI image generation, a field of research that can produce realistic, creative, and diverse images from text, sketches, or other inputs. I will introduce the main concepts and techniques of AI image generation and showcase some of the best AI image generators. I will also share our own work and experience in developing AI image generators for Freepik, a leading platform for free graphic resources. I hope to inspire you with the amazing capabilities and possibilities of AI image generation, and invite you to join me on this exciting journey.
@ctford
Head of Technology at Thoughtworks
Chris is the Head of Technology for Thoughtworks Spain and an experienced architect and technical advisor. His career has taken him from Australia to the UK, India, Uganda and now Spain. As a consultant, he helps clients with architecture, agile development and organisational effectiveness. Chris was a technical reviewer for Zhamak Dehghani's 2022 book 'Data Mesh'. His personal interests include music-as-code, the art of using functional programming as musical notation.
Data Mesh 101
Data Mesh is a new socio-technical approach to data architecture, first described by Zhamak Dehghani and popularised through a guest blog post on Martin Fowler's site (https://martinfowler.com/articles/data-monolith-to-mesh.html). Since then, community interest has grown, due to Data Mesh's ability to explain and address the frustrations that many organisations are experiencing as they try to get value from their data. The 2022 publication of Zhamak's book on Data Mesh (https://www.oreilly.com/library/view/data-mesh/9781492092384/) further provoked conversation, as have the growing number of experience reports from companies that have put Data Mesh into practice. So what's all the fuss about? On the one hand, Data Mesh is a new approach in the field of Big Data.
On the other hand, Data Mesh is an application of the lessons we have learned from domain-driven design and microservices to a data context. This talk will explain how Data Mesh relates to current thinking in software architecture and the historical development of data architecture philosophies. They will outline what benefits Data Mesh brings, what trade-offs it comes with and when organisations should and should not consider adopting it.
@portovep
Lead Developer at Thoughtworks
Pablo is a Lead Developer for Thoughtworks Spain's Data and Artificial Intelligence Service Line. He is a skilled practitioner with a considerable breadth of experience, from infrastructure to microservices to data engineering. Pablo helps startups build MVPs, scale-ups evolve their teams and delivery practices and big enterprises build reliable infrastructure in the cloud. His current focus is to help his clients build robust, testable and maintainable data architectures.
Data Mesh 101
Data Mesh is a new socio-technical approach to data architecture, first described by Zhamak Dehghani and popularised through a guest blog post on Martin Fowler's site (https://martinfowler.com/articles/data-monolith-to-mesh.html). Since then, community interest has grown, due to Data Mesh's ability to explain and address the frustrations that many organisations are experiencing as they try to get value from their data. The 2022 publication of Zhamak's book on Data Mesh (https://www.oreilly.com/library/view/data-mesh/9781492092384/) further provoked conversation, as have the growing number of experience reports from companies that have put Data Mesh into practice. So what's all the fuss about? On the one hand, Data Mesh is a new approach in the field of Big Data.
On the other hand, Data Mesh is an application of the lessons we have learned from domain-driven design and microservices to a data context. This talk will explain how Data Mesh relates to current thinking in software architecture and the historical development of data architecture philosophies. They will outline what benefits Data Mesh brings, what trade-offs it comes with and when organisations should and should not consider adopting it.
@LandstromSammy
Senior Director, CellRebel Data Engineering at Ookla
Sammy has spent his career working in the borderland between business and tech, helping organizations understand, structure and achieve value from data, more recently with a focus on connectivity insights.
Sammy has experience spanning from working deep within the technology side with cloud architecture and development to the business side including Agile Product management, Team management, Business Analysis, Design and Information Architecture.
His areas of expertise include: Cloud architecture, Snowflake, Amazon AWS, Clickhouse, Microsoft Azure, Kimball methodology and dimensional modeling, Agile, Scrum, Service Oriented Architecture, .Net, Python
Connectivity insights with geospatial analytics on Clickhouse
In this talk, Sammy Landstrom will tell the story of our journey evaluating several on-the-fly analytics engines, through to the selection and implementation of the Clickhouse engine. We will share our successes and some of the challenges we’ve faced, and show some examples of what a hundred billion connectivity measurements per day look like when measuring the state of global connectivity.
Head of Architecture at The Workshop
Joaquin leads the Architecture team at The Workshop, where he helps design and build highly scalable distributed systems. With over 20 years of experience in software engineering across multiple industries, Joaquin has a particular passion for evolutionary architecture.
Continuous Evolution: Using Fitness Functions to Drive Platform Modernisation
In the era of continuous evolution, software products need to keep up with the rapid pace of change. The products we design and develop must evolve, adapting to support business growth and align with the latest technological changes. In this way, we ensure they can be maintained and built upon, instead of simply becoming legacy.
In this talk, Joaquin will explore how the principles of evolutionary architecture can help us visualise software product evolution, and how these visualisations can be used to prioritise and drive platform modernisation initiatives.
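To illustrate what such a fitness function can look like in practice, here is a minimal sketch in Python; the services directory, the legacy_billing package and the rule itself are hypothetical examples of a modernisation constraint one might encode as an automated test:

```python
import ast
import pathlib

FORBIDDEN = {"legacy_billing"}  # hypothetical legacy package being migrated away from

def imports_of(path: pathlib.Path) -> set:
    """Collect the top-level modules imported by a Python source file."""
    tree = ast.parse(path.read_text())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def test_no_new_dependencies_on_legacy_code():
    """Fitness function: modern services must not import the legacy package."""
    offenders = [
        str(path)
        for path in pathlib.Path("services").rglob("*.py")
        if imports_of(path) & FORBIDDEN
    ]
    assert not offenders, f"legacy imports found in: {offenders}"
```

Run in CI alongside regular tests, a check like this turns an architectural intention into a measurable signal, which is exactly how fitness functions make platform evolution visible and enforceable.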
Senior Java Engineer at LeoVegas Group
Vasja is from Albania, where she did her bachelor's degree in Computer Engineering. In 2017 she got the opportunity to do her master's degree in Sweden, in the lovely town of Västerås, and after finishing it she joined LeoVegas, which has been a love story ever since 2018. She started as a junior developer and worked her way up to senior, and she has been working extensively with Responsible Gaming since the beginning. She is a big fan of Inter Milan and loves to write book reviews in her free time.
Responsible Gaming in iGaming
How can we ensure Responsible Gaming is always present and accessible in our industry?
@Hilafish1
Senior DevOps Engineer at Wix
Hila Fish is a Senior DevOps Engineer at Wix, with 15 years of experience in the tech industry.
She is an AWS Community Builder and a public speaker who believes the DevOps culture is what drives a company to perform at its best, and she talks about that and other DevOps/Infrastructure topics at conferences.
She carries the vision of enhancing and driving business success by taking care of a company's infrastructure.
In her spare time, Hila is the lead singer of a cover band, gives back to the community by co-organizing DevOps-related conferences (including DevOpsDays TLV and the monitoring-focused StatsCraft event), provides mentorship and manages programs in Baot (the largest technical women’s community in Israel), and enjoys sharing her passion and knowledge wherever she can, including across diverse technology communities, initiatives and social media.
Technical Documentation - How Can I Write Them Better and Why Should I Care?
Data collection done manually is wasteful and can result in duplicated work by different people. Whether you are gathering information for a task or trying to keep code or infrastructure maintainable, documentation plays a crucial part.
In this talk, I’ll show you a structured way to write a technical doc without being a technical writer, so everyone can do it to the best of their ability. I’ll explain why you should care about these docs and how they ultimately serve your best interests (yes, more than one). If you want to save your time and other people’s time, writing documentation well can have a great impact.
@mszymani
CEO at Tantus Data
Marcin is the CEO and a hands-on Data Engineer at TantusData. He has a lot of hands-on experience with technical problems related to Big Data (clusters with hundreds of nodes) as well as practical knowledge of business data analysis and Machine Learning. Companies Marcin has worked for or consulted for include Spotify, Apple, Telia and small startups.
Go big or go…well not one too many. Aka applying machine learning in production
We’ll quickly define a ‘model in production’. There is a myriad of definitions people use right now, and that variance is wholly justified, because how exactly we define it depends, among other factors, on the size of the company, the number of models, the properties of the data, and so on. While we’re at it, we’ll also answer some pressing questions, such as: Is it ever OK to duct-tape the model deployment process? What about using shortcuts and opting for some manual work? Pithy answers. We’ll pause to take on any questions; if there are too many to fit, we’ll provide contact information and make sure to address all enquiries after the presentation. Now, practice makes perfect, or nearer perfection at least. So we’ll briefly present two solutions to the same problem for two clients, implemented very differently, and we’ll explain why. Both solutions required optimising a search engine for results that translate into higher revenue, yet the companies are on opposite ends of the scale: one a huge, mature retailer, the other a much younger and smaller online-booking business. The cost must always be justifiable, so using these cases we’ll succinctly show how to fit a solution to the context of the organisation and how to utilise it well. Next, quickly going through some details of the models and infrastructures, we’ll explain the reasoning behind the critical decisions and highlight the pros and cons of both approaches.
DevOps Team Lead at Xebia Functional
Guillermo is a Telecom engineer with a backend development background, who transitioned to DevOps when someone had to deploy those new pesky microservices. He is also a Kubernetes administrator and is well-experienced in coaching developer teams to adopt good CI/CD practices.
GitOps CD, lessons learned
GitOps and CD are two terms that you can throw around in a meeting and colleagues will be impressed. "They really must know their stuff," people quietly think. The improvements and general advantages they bring to a DevOps flow are widely recognised.
In this talk we will explore some of the more advanced implementation quirks, using ArgoCD as an example:
- Release versioning and multi-environment.
- No more Helm hooks?
- Orchestrating deployments: sync waves and waves.
- Another layer of GitOps: automatic app creation.
- Sync windows.
@jotb23
Winner at JOTB
This challenge is open to anyone and everyone, regardless of whether you have a conference ticket or not! Taking place on the evening of the 10th of May, the rules and format are very simple:
At the end of the session, we’ll announce the lucky winner, who will then get the chance to give a 20-minute talk on the subject of their choice in one of the tracks at J On The Beach 2023!
This challenge is free to enter but there are limited spots available, so if you think you've got what it takes register here.
Winner of the Lightning Talk Challenge
Become a speaker in this slot by taking part in the Lightning Talk Challenge: https://www.jonthebeach.com/lightning-talks
Principal Engineer at Temporal Technologies
My career has followed a bit of a winding road. I started out in the financial markets: Risk Management for a major financial market clearing firm; partner in an Algorithmic Trading company; founder/owner of a hedge fund; president of a German bank. Then back to my roots as a software engineer developing low-latency, high-volume proprietary trading systems. I moved over to Web 1.0 as Amazon's first Sr. Principal Engineer, where I led a team that replaced the existing (monolithic) website architecture with one of the earliest Service-Oriented Architectures for a large-scale website. I also designed and wrote Amazon's RPC framework, standard service application framework, source code repository, build system, and a few other things. On to Google, where I designed and led a team in the building of MillWheel, one of the earliest high-scale continuous computation frameworks. I am now a Principal Engineer at Temporal Technologies, where we are building the foundation of a comprehensive set of tools for Cloud Applications.
Software Organisms
The past and future evolution of software development as we know it.
This talk doesn't try to "prove" anything, except that I have a long history in this industry :)
It's an argument-by-analogy (and therefore immediately suspect, yes?) about how we got to where we are (hint: evolution) and where we are going (hint: more evolution) in designing and writing software in the large. The implication is that the Cloud revolution is still just beginning. If that's so, how will we be building Cloud systems 10 years from now?
This talk takes a stab at answering that question while trying to be "concurrently" interesting and entertaining.