Associate Professor of Databaseology at Carnegie Mellon University
Andy Pavlo is an Associate Professor (Indefinite Tenure) of Databaseology in the Computer Science Department at Carnegie Mellon University. He is also the co-founder of the OtterTune automated database optimization start-up (https://ottertune.com). He is from the streets.
Why Machine Learning for Automatically Optimizing Databases Doesn't Work
Database management systems (DBMSs) are complex pieces of software that require sophisticated tuning to work efficiently for a given workload and operating environment. Such tuning requires considerable effort from experienced administrators, which is not scalable for large DBMS fleets. This problem has led to research on using machine learning (ML) to devise strategies to optimize DBMS configurations for any application, including automatic physical database design, knob configuration, and query tuning. Despite the many academic papers that tout the benefits of using ML to optimize databases, there have been only a few major success stories in industry in the last decade.
In this talk, I discuss the challenges of using ML-enhanced tuning methods to optimize databases. I will address specific incorrect assumptions that researchers make about production database environments and identify why ML is not always the best solution to real-world database problems. As part of this, I will discuss state-of-the-art academic research and real-world tuning implementations.
@breckcs
Cloud Platforms Lead at Tesla
Colin leads the cloud platforms organization for Tesla Energy developing real-time services and critical infrastructure for power generation, battery storage, vehicle charging, and grid services. Over the past six years, he has seen these platforms grow from their infancy to become the largest and most integrated platforms for distributed, renewable energy in the world. Previously, Colin worked on the PI System at OSIsoft, a time-series platform for industrial monitoring and automation.
Kubernetes Probes: How to Avoid Shooting Yourself in the Foot
Kubernetes liveness and readiness probes can be used to make a service more robust and more resilient, by reducing operational issues and improving the quality of service. However, if these probes are not implemented carefully, they can severely degrade the overall operation of a service, to a point where you would be better off without them. I will explore how to avoid making service reliability worse when implementing Kubernetes liveness and readiness probes by learning from production incidents.
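For a concrete picture of the separation the talk argues for, here is a minimal Python sketch (my illustration, not Colin's code): the liveness endpoint checks only the process itself, while the readiness endpoint consults a dependency, so a flaky database takes the pod out of load balancing instead of triggering restart loops. The /healthz and /ready paths and check_database() are hypothetical.

```python
# Minimal sketch (not from the talk): a service with separate liveness and
# readiness endpoints. Liveness checks only the process itself; readiness
# may consult dependencies, so a database outage removes the pod from the
# Service endpoints rather than restarting the container.
from http.server import BaseHTTPRequestHandler, HTTPServer


def check_database() -> bool:
    """Hypothetical dependency check; replace with a real connection ping."""
    return True


class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness: answer 200 as long as the process can serve requests.
            # Do NOT check dependencies here, or a dependency outage will
            # trigger restart loops across the whole fleet.
            self._respond(200, b"alive")
        elif self.path == "/ready":
            # Readiness: failing here only stops traffic to this pod.
            ok = check_database()
            self._respond(200 if ok else 503, b"ready" if ok else b"not ready")
        else:
            self._respond(404, b"not found")

    def _respond(self, status: int, body: bytes):
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8080), ProbeHandler).serve_forever()
```

On the Kubernetes side, the liveness and readiness probes would then simply point at these two paths, each with its own failure threshold.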
@kvrajk
Senior Software Engineer at Grafana Labs
Kaviraj is a Senior Software Engineer at Grafana Labs, working on highly scalable storage for logs. He is one of the core maintainers of the Grafana Loki open-source project.
In his previous life, he wrote and maintained C++ modules for L2/L3 traffic management in telecom backend systems. Later in his career he moved to web backends and distributed systems, writing bits of Python and Go. He is a big UNIX fanboy, passionate about OS abstractions and internals.
Getting started with Grafana Loki, a modern logs database
Loki is a modern log database with different design tradeoffs (index only metadata) compared to traditional log databases (index everything). This makes Loki easy to use and operate at petabyte scale. It can also use cheaper cloud object storage (e.g., S3) as persistent storage for both index and chunks.
First, we try to understand: why Loki? Why do we even need a modern logs database? The data model of Grafana Loki is different from that of traditional log databases. We will explore why that difference matters when handling logs at huge (petabyte) scale and how it makes life easier for both Loki operators and Loki users. We will then explore how to use Loki: we scrape logs from different targets and send them to Loki, then use LogQL (a powerful query language for logs, inspired by PromQL) to get visibility into your logs instead of just a distributed grep. We will also explore a few best practices on logging patterns that we use internally, and how they help to effectively investigate slow queries and endpoints in your applications and services.
Sometimes logs alone may not give you the complete picture, so we will explore how Loki integrates with metrics and traces to enhance the observability experience. Finally, we will touch on some of the new and upcoming features of Loki.
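As a small taste of LogQL ahead of the session, here is a Python sketch (mine, not from the talk) that queries Loki's documented /loki/api/v1/query_range HTTP endpoint; the Loki address and the {app="myapp"} label are assumptions.

```python
# Sketch: run a LogQL filter query against a local Loki instance over the
# last hour. Labels narrow the streams first; |= then filters the lines,
# which is what makes this more than a distributed grep.
import time
import requests

LOKI_URL = "http://localhost:3100/loki/api/v1/query_range"
logql = '{app="myapp"} |= "error"'

now = int(time.time() * 1e9)  # Loki timestamps are in nanoseconds
params = {
    "query": logql,
    "start": now - int(3600 * 1e9),  # one hour ago
    "end": now,
    "limit": 100,
}

resp = requests.get(LOKI_URL, params=params, timeout=10)
resp.raise_for_status()
for stream in resp.json()["data"]["result"]:
    print(stream["stream"])            # the stream's label set
    for ts, line in stream["values"]:  # [timestamp, log line] pairs
        print(" ", ts, line)
```

The same endpoint also accepts metric queries such as rate({app="myapp"} |= "error" [5m]), which turn log streams into time series for dashboards and alerts.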
@geeksusma
Senior Software Engineer at Celonis
Jesus is a passionate developer with 18 years of experience and sound knowledge of the software development lifecycle. While his current focus is Java backend development, in the past Jesus has worked in very different stacks: PHP, JavaScript, TypeScript, Python, Dart, C++, and even COBOL! Considered one of the foremost XP advocates by the Spanish community, he is now working at Celonis, where he is starting to dive into the process mining and AI worlds. Want to learn more? Besides coding and best practices, Jesus loves extreme sports and punk rock music.
Can Engineers Save the Planet?
Process Mining has become a major enabler in successfully bringing operational excellence to business processes. The journey to implementing Process Mining is similar across companies, but the outcome (insights and actions) is unique to each.
Traditional approaches fail to understand the real-life complexity of processes and also struggle to provide complete insights given the vast amounts of data that are now available. By contrast, Process Mining offers a data-driven and more objective and holistic approach to understanding business processes. As a result, Process Mining has come to dominate a large majority of operational excellence, automation and digitalization ambitions within the industry.
Put simply, Process Mining reveals the truth in your data, because the data don't lie.
Process Mining is the leading new technology when it comes to algorithmic businesses, in other words, businesses that use algorithms and large amounts of real-time data to create business value. The global Process Mining software market was valued at approximately USD 322.02 million in 2020 and is anticipated to grow by more than 50.1% by 2027.
At Celonis, we’ve been applying Process Mining for over a decade and in this session, I will share the principles, the main algorithms and our approach that have helped us to scale and evolve our platform to support hundreds of companies to optimize their processes and drive business success while helping to create a better and more sustainable world.
@heidiann360
Senior Researcher at Microsoft
I am a Senior Researcher in the Confidential Computing group at Microsoft Research Cambridge. My research sits at the intersection between the theory and practice of distributed computing, with a focus on developing resilient and trustworthy distributed computer systems. Previously, I was a Research Fellow in Computer Science at Cambridge University’s Trinity Hall, an Affiliated/Visiting Researcher at VMware Research, and an Affiliated Lecturer at Cambridge University’s Department of Computer Science and Technology. I received my Ph.D. from Cambridge University in 2019 for my research on Distributed Consensus. I am probably best known for my work on the Paxos algorithm, and in particular, the invention of Flexible Paxos.
Confidential Consortium Framework: Building Secure Multiparty Applications in the Cloud (Without Handing Over the Keys to the Kingdom!)
In the pre-cloud era, computer systems were operated by the organizations that depended upon them. This on-premises approach gave organizations great power over their systems; however, "with great power comes great responsibility", and organizations were left with the ongoing burden of deploying and managing their own infrastructure. Today, cloud computing has removed much of the responsibility of deploying systems, but it has also removed much of the power that organizations once had. Organizations must place their trust in the cloud to secure the confidentiality and integrity of their data.
In this talk, I'll consider whether it is possible to regain control over data in the cloud (great power with none of the responsibility) and even enable multiple untrusted parties to compute together on untrusted infrastructure. I’ll introduce the Confidential Consortium Framework (aka CCF), an open-source framework for building a new category of secure multiparty applications with confidentiality, integrity protection, and high availability. CCF utilizes hardware-based trusted execution environments for remotely verifiable confidentiality and code integrity, backed by an auditable and immutable distributed ledger for data integrity and high availability. CCF even enables application developers to bring both their own application logic and a custom multi-party governance model, in the form of a programmable constitution. By the conclusion of this talk, I hope to have convinced you that distributing systems does not necessarily mean distributing trust in the era of confidential computing in the cloud. You can learn more about CCF today at: https://ccf.dev/
@DaveAronson
Software Development Consultant at Codosaurus
Dave is a semi-retired software development consultant (writing code and giving advice about it), with 37 years of professional experience in a wide variety of languages, systems, frameworks, techniques, domains, etc. He is the T. Rex of Codosaurus, LLC (his one-person consulting firm, which explains how he can get such a cool title, at https://www.Codosaur.us/) near Washington, DC, USA. His main focus in software is to spread the gospel of quality, including defining what that even means. In his spare time, he makes mead and teaches others how.
Tight Genes: Intro to Genetic Algorithms
Yes, that’s right, geneTic, not geneRic. Genetic algorithms are a way to “evolve” solutions to a problem, similar to real-world biological evolution. This often reveals great solutions that humans probably would never have thought of, such as the twisty NASA ST5 spacecraft antenna, developed by a genetic algorithm in 2006!
This talk will explain the concept and its terms, and then walk you through some examples, including creating a simple generic genetic-algorithm “runner”, and multiple algorithms for it to run, such as characters to fit D&D classes and mead recipes to yield specified levels of sweetness and strength.
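To make the vocabulary concrete before the talk, here is a toy Python version of such a generic runner (my sketch, not the speaker's code): it needs only a genome factory, a fitness function, and mutation and crossover operators, and it evolves a string toward a target, standing in for D&D characters or mead recipes.

```python
# Toy genetic algorithm: evolve a random string toward TARGET using
# truncation selection, single-point crossover, and per-character mutation.
import random

TARGET = "TIGHT GENES"
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"


def random_genome():
    return "".join(random.choice(ALPHABET) for _ in TARGET)


def fitness(genome):
    # Higher is better: number of characters matching the target.
    return sum(a == b for a, b in zip(genome, TARGET))


def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]


def mutate(genome, rate=0.05):
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c for c in genome
    )


def run(pop_size=200, generations=200):
    population = [random_genome() for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if fitness(best) == len(TARGET):
            return gen, best
        parents = population[: pop_size // 5]  # keep the fittest fifth
        population = [
            mutate(crossover(*random.sample(parents, 2)))
            for _ in range(pop_size)
        ]
    return generations, max(population, key=fitness)


if __name__ == "__main__":
    print(run())
```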
@s0rc3r3r01
VP of Engineering at Contino
Experienced engineering leader with a strong interest in distributed, highly scalable, and cloud-based systems. Currently at Contino as VP of Engineering, where he oversees a team of 150+ highly talented, intelligent, and thought-provoking technical engineers from a range of disciplines and backgrounds. Worked for years in technical leadership roles focused on infrastructure management and the three major public cloud providers. Experienced in the finance sector, specifically in high-performance payment platforms and systems compliance. Regular speaker at conferences and meetups. Federico holds an MSc in Software Engineering from City, University of London.
Containers in the cloud - State of the Art in 2023
In only a few years, the number of options available to run containers in the cloud has exploded. Each provider now offers dozens of "slightly different" services, each with its own minor trade-offs. Furthermore, running your applications in 2023 is definitely not like doing it in 2019: some of the new serverless options offer unique value propositions that shouldn't be missed. It's easy to get overwhelmed! This talk will categorize the various options available on AWS, Azure & GCP, focusing on what is state of the art in 2023. We'll look at Kubernetes and its evolution, and explain the trade-offs between the different categories from a technical and organizational standpoint. Finally, we'll do a deep dive, with a demo, into some of the recently launched services that are quickly evolving to change the game: GCP Cloud Run, Azure Container Apps, and AWS Fargate.
Deputy CTO at Azul Systems
Simon Ritter is the Deputy CTO of Azul Systems. Simon joined Sun Microsystems in 1996 and spent time working in both Java development and consultancy. He has been presenting Java technologies to developers since 1999, focusing on the core Java platform as well as client and embedded applications. At Azul, he continues to help people understand Java and Azul's JVM products.
Simon is a Java Champion and two-time recipient of the JavaOne Rockstar award. In addition, he represents Azul on the JCP Executive Committee and in the OpenJDK Vulnerability Group, and has served on the JSR Expert Group since Java SE 9.
The Cloud Native Compiler: JIT-as-a-Service
Adaptive, just-in-time (JIT) compilation provides a massive performance improvement to JVM-based applications compared to only using an interpreter. The downside is that applications have to compile frequently used methods while the application is running, which can lead to reduced throughput and slower response times. Another drawback is that each time an application is started, it must perform the same analysis to identify hot-spot methods and compile them. When running an application in the cloud, the elastic nature of resources provides the ability to change and improve the dynamics of how the JIT compiler works. In this session, we'll look at Azul's work to move the JIT compiler into a centralized service that can be shared by many JVMs. This provides many advantages, such as caching compiled code for instant delivery when restarting the same application or spinning up new instances of the same service. In addition, it removes the compilation workload from individual JVMs, allowing them to deliver more transactions per second of application work. Finally, there is the opportunity to apply considerably more compute resources, enabling complex optimizations that wouldn't be practical in a single JVM.
@nikhilbarthwal
Senior Software Engineer at Facebook
Nikhil Barthwal is passionate about building distributed systems. He has several years of work experience in both big companies and smaller startups, and also acts as a mentor to several startups. Outside of work, he speaks at international conferences on topics related to distributed systems and programming languages. You can learn more about him via his homepage, www.nikhilbarthwal.com.
Modeling and Verification of Concurrent & Distributed systems
The majority of distributed systems are designed as untestable whiteboard drawings. This leads to design flaws that go unnoticed. Human intuition has its limits, and these systems operate beyond those limits. This talk describes how we can use mathematical models to find and eliminate these flaws.
Distributed and concurrent systems have grown exponentially in popularity in recent years. However, the vast majority of these systems are designed as untestable whiteboard drawings. This leads to fundamental design flaws that go unnoticed in the design phase, producing hard-to-find bugs that are expensive to correct.
This talk is about TLA+, a high-level language for modelling concurrent and distributed systems using simple mathematics. These models can be thought of as blueprints for software, except that they are exhaustively testable.
Human intuition has its limits, and most of these systems operate at a scale well beyond what we humans can comprehend. TLA+ specifications are written in a formal language, which makes them amenable to finite model checking. The model checker finds all possible system behaviors up to some number of execution steps and examines them for violations of desired properties such as safety and liveness.
The objective of the talk is to demonstrate how TLA+ can be used to eliminate design flaws before system implementation is underway.
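TLA+ models are written in TLA+ itself and checked with the TLC model checker; as a rough Python analogue of what exhaustive checking means (my illustration, not TLA+), the sketch below explores every reachable state of a deliberately broken two-process mutex, where each process reads the lock and sets it in two separate steps, and finds the interleaving that violates mutual exclusion.

```python
# Explicit-state model checking in miniature: breadth-first search over all
# behaviors of a broken mutex, checking a safety invariant at every state.
from collections import deque


def next_states(state):
    pc0, pc1, lock = state
    pcs = [pc0, pc1]
    for i in (0, 1):
        new = list(pcs)
        if pcs[i] == "idle" and lock == 0:
            new[i] = "saw_free"   # read the lock and found it free
            yield (new[0], new[1], lock)
        elif pcs[i] == "saw_free":
            new[i] = "critical"   # acquire based on the (now stale) read
            yield (new[0], new[1], 1)
        elif pcs[i] == "critical":
            new[i] = "idle"       # leave and release the lock
            yield (new[0], new[1], 0)


def invariant(state):
    # Safety property: both processes must never be critical at once.
    return not (state[0] == "critical" and state[1] == "critical")


def check(initial):
    """Explore every reachable state; return a counterexample trace."""
    seen, frontier, parent = {initial}, deque([initial]), {initial: None}
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            trace = []
            while state is not None:  # walk parents to rebuild the trace
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                parent[nxt] = state
                frontier.append(nxt)
    return None  # invariant holds in every reachable state


for step in check(("idle", "idle", 0)) or []:
    print(step)
```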
@jonatasdp
How to model a Time Series database with TimescaleDB
Storing massive time series data is always a challenge, and modeling it raises questions about how to balance heavy write throughput against keeping the data easy to query. Join the talk to learn the pros and cons of wide versus narrow structures for storing time series data.
This talk is an overview of time-series database storage and how to create a structure that can help you maintain your systems in the long term.
TimescaleDB is a PostgreSQL extension that brings time-series superpowers to plain SQL. This talk will discuss wide and narrow models and how to leverage the extension's features in a common DevOps scenario.
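To make the wide-versus-narrow distinction concrete, here is a minimal Python sketch (mine, not from the talk) creating both layouts as TimescaleDB hypertables; it assumes a local TimescaleDB instance reachable via the DSN shown, and the table and column names are illustrative.

```python
# Sketch: the same metrics modeled two ways, each partitioned by time via
# TimescaleDB's create_hypertable().
import psycopg2

NARROW = """
-- Narrow: one row per reading; flexible (new metrics are just new rows),
-- but queries across metrics must pivot or self-join.
CREATE TABLE metrics_narrow (
    time  TIMESTAMPTZ NOT NULL,
    host  TEXT        NOT NULL,
    name  TEXT        NOT NULL,  -- e.g. 'cpu', 'mem', 'disk_io'
    value DOUBLE PRECISION
);
SELECT create_hypertable('metrics_narrow', 'time');
"""

WIDE = """
-- Wide: one row per timestamp with a column per metric; compact rows and
-- fast scans, but adding a new metric means changing the schema.
CREATE TABLE metrics_wide (
    time    TIMESTAMPTZ NOT NULL,
    host    TEXT        NOT NULL,
    cpu     DOUBLE PRECISION,
    mem     DOUBLE PRECISION,
    disk_io DOUBLE PRECISION
);
SELECT create_hypertable('metrics_wide', 'time');
"""

with psycopg2.connect("dbname=tsdb user=postgres host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute(NARROW)
        cur.execute(WIDE)
```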
@urlichsanais
Open Source Developer Advocate at Aqua Security
Anaïs is a Developer Advocate at Aqua Security, where she contributes to Aqua's cloud native open source projects. When she is not advocating DevOps best practices, she runs her own YouTube channel centered around cloud native technologies. Before joining Aqua, Anaïs worked as an SRE at Civo, a cloud native service provider, where she worked on infrastructure for hundreds of tenant clusters. As a CNCF ambassador, her passion lies in making tools and platforms more accessible to developers and community members.
Stranger Danger -- how to proactively identify misconfiguration to minimise system failure
The real cost of misconfiguration to businesses has been estimated at several trillion dollars in recent years. These costs are the result of misconfiguration in infrastructure and workloads. One of the main techniques for preventing misconfiguration is the proactive use of security scanners, such as Trivy or Kubescape. The scan results give us insight into the security posture of our services over time. However, these scanners treat our resources as static and evaluate misconfigurations only as single instances. To assess the potential impact of misconfiguration on our production environment, we need additional tools. In this talk, we will look at ways Chaos Engineering can help us minimise the potential damage of misconfiguration.
Chaos Engineering is the process of intentionally introducing faults into a system to test its resilience. Anaïs will walk you through the principles of Chaos Engineering and how it can be used to proactively identify misconfiguration and make our deployment pipelines and services more robust.
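As one concrete flavor of proactive scanning, here is a small Python sketch (my illustration, not from the talk) that runs Trivy's configuration scanner over a manifests directory in a pipeline and fails the build on HIGH or CRITICAL findings; the directory path is a placeholder, and the JSON field names should be verified against your Trivy version.

```python
# Sketch: gate a CI pipeline on Trivy's IaC/manifest misconfiguration scan.
# Assumes the `trivy` CLI is installed and on PATH.
import json
import subprocess
import sys

result = subprocess.run(
    ["trivy", "config", "--format", "json", "./manifests"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

failures = []
for target in report.get("Results", []):
    for finding in target.get("Misconfigurations") or []:
        if finding.get("Severity") in ("HIGH", "CRITICAL"):
            failures.append((target.get("Target"),
                             finding.get("ID"),
                             finding.get("Title")))

for path, rule_id, title in failures:
    print(f"{path}: {rule_id} {title}")
sys.exit(1 if failures else 0)  # non-zero exit fails the pipeline stage
```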
R&D Engineer / Innovation at Fortris
Alvaro Lopez has more than 10 years of experience as a software engineer in the video game industry. In recent years, he has turned his attention to decentralized technologies, in terms of both cybersecurity and scalability. He now holds the position of R&D Engineer at Fortris and is pursuing a Ph.D. at the University of Malaga in decentralized technology and its cybersecurity implications.
The price of scalability in blockchain
This talk examines the challenges of balancing security and performance in blockchain technology. It covers the costs of improving performance, known challenges in blockchain platforms, and the issue of centralization in decentralized systems.
Senior Software Engineer in Data Analytics and Visualization at Epam
Olga Nosenko works 8-9 hours per day as a Senior Software Engineer working on data visualisation. Afterwards, she switches to being a wine enthusiast. She is the owner of a golden retriever and a travel geek. She has a background in Supply Chain Management, but once upon a time she found herself ready to start all over and switch to IT. And she is happier than ever :)
Unleash your superpower: you do more than others think
Have you ever found yourself in a situation where your work went unseen? It's easy to receive recognition for putting out fires, but much harder for preventing them. I will tell you how I faced the need to make invisible work recognised during my assignment on a Data Visualisation project, give hints on how to do that, and explain why it is crucial for you.
We work in a fast-developing world where a huge number of new technologies and tools arise every month, and what we actually seek are simple solutions to complex problems. But reality hits differently: much of the work employees do remains unseen, since only the final part is presented to stakeholders and other team members.
I will share some hands-on experience demonstrating the full-cycle development of a Tableau dashboard, from design proposition to server administration.
I will cover examples of non-trivial tasks that visualisation engineers carry out to enhance the final product and automate development.
@ArenasAncizar
Engineering Manager at Saber.tech
Ancizar is a seasoned Software Engineering Manager with over a decade of experience in the industry. He has a proven track record of building and delivering complex software projects across various industries and products such as travel & tourism, lottery and gaming, all whilst fostering a culture of innovation and collaboration within his teams.
As a leader, Ancizar believes that a team's success is directly tied to the happiness and growth of its members. He fosters an environment of trust, respect, and collaboration where his team members can thrive and develop their skills to the fullest.
His passion for servant leadership and technology has allowed Ancizar to make a positive impact not only in businesses but also in the teams he has led and their members, many of whom have grown and developed into successful leaders themselves.
Our journey to distributed architecture - scaling from monolith to SCS
Our goal is to share our experiences, both from a technical and business perspective, with the community and inspire others who are facing similar challenges. Join us for an informative and inspiring talk on scaling software solutions and empowering businesses.
We'll take you on our journey of scaling to a distributed architecture, and how we tackled the challenges that came with it.
We'll cover how we scaled and organised development teams to match the evolving architecture and business needs, and the lessons we learned along the way.
We’ll also cover how we addressed these business needs from an architectural perspective by talking about how we adopted a self-contained system architecture whilst applying domain-driven design principles.
Engineering Manager at Saber.tech
Javier is an experienced Software Engineering Manager with over 15 years' experience in designing and developing innovative software solutions for businesses of all sizes.
He is passionate about leveraging technology to empower businesses to achieve their goals and drive growth.
As a servant leader, Javier is committed to putting the needs of his team and customers first. He believes in leading by example and empowering his teams to take ownership of their work while providing the support and resources needed to help them succeed. He is dedicated to building strong relationships with his stakeholders, understanding their unique needs and challenges, and working collaboratively to develop tailored software solutions that drive real business results.
Our journey to distributed architecture - scaling from monolith to SCS
Our goal is to share our experiences, both from a technical and business perspective, with the community and inspire others who are facing similar challenges. Join us for an informative and inspiring talk on scaling software solutions and empowering businesses.
We'll take you on our journey of scaling to a distributed architecture, and how we tackled the challenges that came with it.
We'll cover how we scaled and organised development teams to match the evolving architecture and business needs, and the lessons we learned along the way.
We’ll also cover how we addressed these business needs from an architectural perspective by talking about how we adopted a self-contained system architecture whilst applying domain-driven design principles.
@eva_trostlos
Engineering Manager at This Dot, Inc.
Eva used to be a frontend developer until she uttered the words "I can see myself leading a team in a couple of years ... maybe". A month later she was managing a group of developers and hasn't looked back since. Delving into coaching techniques and communication theory she created an intuitive approach to engineering management to engage with her fully remote team at This Dot. In her free time, she tends to jump from hobby to hobby and enjoys everything from reading to crafting, trying out new sports, and falling down various research rabbit holes.
A person-centered approach to engineering management
Many companies talk about it, but what does it mean to "put people first"? How can we create successful team environments where everyone feels heard, valued, and supported?
In this talk, we will explore techniques such as active listening, look into person-centered values, and draw on communication theory to find actionable ways for personal and professional growth. Both managers and team members will learn how to communicate their needs in a way that fosters a positive and productive work environment.
Whether you're a manager looking to take your team members to the next level or a person ready to become a better communicator and empathic colleague, this talk is for you!
@JohnnyWiller1
Software Engineer at Ocado Technology
Johnny Willer is a software engineer at Ocado Technology, currently working in the Payments domain within the company’s ecommerce stream. He has a Bachelor’s degree in Computer Science, is Java 11 OCP certified and strongly believes that quality software should be the standard rather than the exception. With ten years of experience in software craftsmanship, Johnny has worked on a variety of domains, both in the private and public government sectors. On a more personal note, he is a chess enthusiast and flower stick juggler.
Hexagonal Architecture and Monolith decomposition
Would you like to know how to decompose a monolith into a modular one using design practices like hexagonal architecture driven by use case semantics? This powerful technique enables an application codebase to evolve while minimising the risks of highly coupled, low-cohesion modules and fragile tests.
Payments are part of everyday life. Whatever we buy, we need to pay for, right? For consumers, this seems simple, but only engineers understand the deep complexities inside payments, and consumers' low tolerance for error. After launching the world's first pure-play grocery retailing website 20 years ago, we've developed the Ocado Smart Platform (OSP), our end-to-end online grocery fulfilment solution, adopted by 12 of the world's most forward-thinking retailers. This fast growth comes with the challenge of keeping the codebase clean, cohesive, and loosely coupled while following market-leading architectural principles like SOLID. To address this, we use a new, innovative architectural approach based on Hexagonal Architecture driven by Use Case semantics. This approach is being rolled out gradually during a Monolith Decomposition. We use the Feature Flag technique extensively to select different infrastructure components and achieve other behaviours at runtime. In this session, you'll also get some valuable tips and tricks for applying this style in your codebase.
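As a compact illustration of the style (my sketch, not Ocado's code), the example below shows a use case depending only on a port, with the infrastructure adapter selected behind a feature flag; all names are hypothetical.

```python
# Hexagonal architecture in miniature: the use case depends on a port
# (interface), and adapters are wired in at the edge, e.g. via a feature
# flag, without touching the domain logic.
from dataclasses import dataclass
from typing import Protocol


class PaymentGatewayPort(Protocol):
    """Outbound port: what the use case needs, expressed in domain terms."""

    def charge(self, order_id: str, amount_pence: int) -> bool: ...


@dataclass
class TakePaymentUseCase:
    """One use case, one reason to change; knows nothing of infrastructure."""

    gateway: PaymentGatewayPort

    def execute(self, order_id: str, amount_pence: int) -> str:
        if amount_pence <= 0:
            return "rejected"
        return "paid" if self.gateway.charge(order_id, amount_pence) else "failed"


class LegacyGatewayAdapter:
    def charge(self, order_id: str, amount_pence: int) -> bool:
        return True  # would call into the monolith's existing payment module


class NewProviderAdapter:
    def charge(self, order_id: str, amount_pence: int) -> bool:
        return True  # would call the extracted service's API


def build_use_case(flags: dict) -> TakePaymentUseCase:
    # The feature flag selects the infrastructure adapter at runtime.
    adapter = NewProviderAdapter() if flags.get("new-gateway") else LegacyGatewayAdapter()
    return TakePaymentUseCase(gateway=adapter)


print(build_use_case({"new-gateway": True}).execute("order-42", 1999))
```

Because the use case sees only the port, tests can inject a fake adapter, which is what keeps tests from becoming fragile as the monolith is decomposed.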
@thoughtsymmetry
Head of Data Science at Zoopla
Chanuki Illushka Seresinhe is the Head of Data Science at Zoopla, where she manages data scientists at both Zoopla and Hometrack. She is also the founder of beautifulplaces.ai. Chanuki has a PhD in Data Science from the University of Warwick. Her research at the University of Warwick and the Alan Turing Institute, which involved understanding how the aesthetics of the environment influence human well-being, has been featured in press worldwide, including the Economist, Wired, The Times, BBC, Spiegel Online, Guardian, Telegraph, Scientific American, Newsweek and MIT Technology Review. Prior to pursuing a career in data science, she ran her own digital design consultancy for over eight years.
Unravelling insights about places with computer vision
Images are a rich source of information about our environment, and deep learning has enabled us to extract insights from them with unprecedented accuracy and efficiency. In this talk, we will explore how deep learning can be used to gain insights about places, using images as the primary source of data.
Specifically, I will explain how I used AI to analyse crowd-sourced ratings of over 200,000 images of Great Britain from the online game Scenic-Or-Not, to help us understand what beautiful places are composed of. I also trained an AI algorithm to predict how beautiful a place is.
I will also talk about how we are using Deep Learning at Zoopla to gather interesting insights about properties. Ultimately, this talk will demonstrate how deep learning can revolutionise the way we analyse and understand places.
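For readers curious about the general technique, here is a hedged sketch of transfer learning for scenicness regression (not the speaker's actual model or data): fine-tune a pretrained CNN to predict a scalar beauty rating; the batch below is a random placeholder for a real Scenic-Or-Not style dataset.

```python
# Sketch: adapt an ImageNet-pretrained ResNet to regress a scenicness score.
import torch
import torch.nn as nn
from torchvision import models

# Replace the 1000-class classifier head with a single regression output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batch: in practice images and crowd-sourced ratings would come
# from a DataLoader (Scenic-Or-Not ratings range from 1 to 10).
images = torch.randn(8, 3, 224, 224)
ratings = torch.rand(8, 1) * 9 + 1

model.train()
pred = model(images)        # one training step, just to show the loop shape
loss = loss_fn(pred, ratings)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```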
@holanda_pe
COO at DuckDB
Pedro Holanda is a computer scientist with a background in database architectures. He completed his Ph.D. at CWI in Amsterdam, where he specialized in indexing for interactive data analysis. He is a prominent contributor to the open-source database management system, DuckDB. Currently, he is the COO of DuckDB Labs, a company that provides services and support for DuckDB.
In-Process Analytical Data Management with DuckDB
Analytical data management systems have traditionally been hard to use, expensive, and far removed from the application processes they serve. However, by revamping these systems to integrate with the application process, data transfer, deployment, and management can be significantly streamlined. This approach opens up a plethora of new possibilities, including edge OLAP, running SQL queries in lambdas, and analyzing big data on laptops.
Enter DuckDB, a novel analytical data management system that has been specifically designed for in-process use cases. DuckDB is SQL-enabled, easily integrated as a library, and boasts cutting-edge query processing techniques, such as vectorized execution and lightweight compression. It is also free and open-source software, distributed under the permissive MIT license. In my presentation, I will delve into the reasoning and design principles that underpin DuckDB, and offer a comprehensive overview of its inner workings.
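For a taste of the in-process model, here is a minimal Python example (mine, not from the talk): no server to manage, just a library import, querying a Parquet file directly; the file and column names are placeholders.

```python
# DuckDB runs inside this very process: connect() with no arguments gives
# an in-memory database, and Parquet files can be queried like tables.
import duckdb

con = duckdb.connect()

con.execute("CREATE TABLE trips AS SELECT * FROM 'trips.parquet'")
rows = con.execute("""
    SELECT pickup_zone, count(*) AS n, avg(fare) AS avg_fare
    FROM trips
    GROUP BY pickup_zone
    ORDER BY n DESC
    LIMIT 5
""").fetchall()
print(rows)
```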
@jxm_math
Software Engineer at Devo
Juanjo Madrigal is a software engineer at Devo. He moved from pure maths (MSc in Advanced Mathematics, Geometry & Topology, UCM) to programming, passing through fields like Computer Vision, Deep Learning and AI. During this journey, he has developed an interest in many different areas like functional programming or computer graphics, and some other less-known ones such as topological data analysis, computer algebra systems or type theory.
Probabilistic streaming algorithms: accuracy and speed at low cost
In recent years, there has been extensive study of algorithms able to process large amounts of streaming data with a small memory footprint. Probabilistic algorithms (and probabilistic data structures) deliver counter-intuitively good results: they are fast, light, and remarkably accurate. While the term "probabilistic" usually sounds like randomly picking or discarding data, these algorithms are in fact genuinely clever, using randomness in unexpected ways to achieve very precise results with a tiny fraction of the memory an exact algorithm would need.
In this talk we will explore the role and inner workings of probabilistic algorithms in some common scenarios for massive data, such as the count-distinct and heavy-hitters problems. With the help of some examples, we will discuss the desirable properties of a good streaming algorithm before diving into the HyperLogLog++ and Count-Min Sketch algorithms, which, despite having simple core ideas, cope with major tasks like analyzing all search queries on Google in real time.
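As a preview of the flavor of these algorithms, here is a bare-bones Count-Min Sketch in Python (my sketch of the textbook structure, not the speaker's code): several hash rows of fixed width, one counter touched per row on each update, and the minimum across rows as the estimate, so hash collisions can only cause over-counting, never under-counting.

```python
# Count-Min Sketch: frequency estimates for a stream in O(width * depth)
# memory, regardless of how many distinct items flow through.
import random


class CountMinSketch:
    def __init__(self, width=2048, depth=5, seed=42):
        rng = random.Random(seed)
        self.width = width
        # One independent hash seed per row; Python's built-in hash() is
        # used here for simplicity.
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, seed, item):
        return hash((seed, item)) % self.width

    def add(self, item, count=1):
        for row, seed in zip(self.table, self.seeds):
            row[self._index(seed, item)] += count

    def estimate(self, item):
        # Collisions only ever inflate counters, so the minimum across
        # rows is the tightest (over-)estimate.
        return min(row[self._index(seed, item)]
                   for row, seed in zip(self.table, self.seeds))


cms = CountMinSketch()
for word in ["query"] * 1000 + ["rare"] * 3:
    cms.add(word)
print(cms.estimate("query"), cms.estimate("rare"))  # ~1000, >= 3
```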
Data Visualization Artist at Visual Cinnamon
Nadieh Bremer is a data visualization artist who graduated as an astronomer and worked as a data scientist before finding her true passion in the visualization of data. Named 2017's "Best Individual" in the Information is Beautiful Awards, and co-author of "Data Sketches", she focuses on visuals that are uniquely crafted for each specific dataset, often working with large and complex datasets and vibrant color palettes. She has made visualizations and art for companies such as Google News Lab, Sony Music, UNICEF, the New York Times and UNESCO.
Visualizing Connections
Connections are a part of us, of the world: between people, between cultures, within language, and more. In these days when more data is collected daily than we could ever hope to explore, the variety of connections being gathered opens up the possibility of visualizing these (often complex) networks. During this talk, Nadieh will take you through the design process of several of her (interactive) data visualization works, from personal projects to client work. The common thread they all share is that they all reveal connections, but each does so differently: from a family tree of 3,000 people connected to the European royal houses, to the connections within our Intangible Cultural Heritage created for UNESCO, to the connections we have drawn in the night skies, something with cats and dogs, and more. Showing that all types of connections are unique, and teasing out the intricacies that lie within them, requires a creative, iterative, and custom approach.