
DevConf.US 2022 is the 5th annual, free, Red Hat-sponsored technology conference for community project and professional contributors to Free and Open Source technologies, coming to Boston this August!


Future Tech & OpenResearch
Thursday, August 18
 

10:30 EDT

Kepler: Sustainability in Computing Proposal
In 2021, an ACM technology brief estimated that the information and communication technology (ICT) sector contributed between 1.8% and 3.9% of global carbon emissions. As organizations aim to improve their sustainability credentials, they will inevitably consider the impact of computing, in terms of both hardware and software. Companies are also under pressure from governments to adopt more sustainable practices. Initially proposed in July 2021 and still awaiting approval, the European directive on energy efficiency includes policies that would require any datacenter, however small, to perform an energy audit every four years and report the findings; hyperscale facilities would have to report their energy audits annually.

Currently, energy consumption metrics are only available at the node level; there is no way to obtain container-level energy consumption. Autoscalers and schedulers need pod-level metrics in order to obtain energy savings by resizing or migrating containers.

We present the Kubernetes-based Efficient Power Level Exporter (Kepler) and its integration with Kubernetes. By leveraging eBPF programs, Kepler probes per-container, energy-consumption-related system counters and exports them as metrics. These metrics help end users observe their containers' energy consumption and allow cluster admins to make informed decisions toward energy conservation goals. We demonstrate that Kepler can be easily integrated with Prometheus and existing dashboards.
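As a rough illustration of how per-container energy counters could surface to Prometheus, the sketch below formats one sample in the Prometheus text exposition format. The metric and label names are assumptions for illustration only, not Kepler's actual metric schema.

```python
# Sketch: emit a per-container energy sample in the Prometheus text
# exposition format. Metric and label names are hypothetical.

def format_metric(name: str, labels: dict, value: float) -> str:
    """Render one sample: name{label="value",...} value"""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = format_metric(
    "container_energy_joules_total",         # hypothetical metric name
    {"pod": "web-1", "container": "app"},    # hypothetical labels
    12.5,                                    # joules read from eBPF counters
)
print(line)
# container_energy_joules_total{container="app",pod="web-1"} 12.5
```

A scrape endpoint serving lines like this is all Prometheus needs to ingest the data into existing dashboards.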

Speakers

Kaiyi Liu

Software Engineering Intern, Red Hat
Kaiyi Liu is a Software Engineering intern in the emerging technologies group on the sustainability team at Red Hat. He is a fourth-year Computer Science student at the University of Toronto. At Red Hat, he has developed tools for computing power prediction and energy-oriented…

Parul Singh

Senior Software Engineer, Red Hat
Parul Singh is a Senior Software Engineer in the emerging technologies group within the Red Hat Office of the CTO. She is responsible for researching emerging technology trends and developing cloud-native prototypes that address the identified challenges and opportunities and inform…


Thursday August 18, 2022 10:30 - 10:55 EDT
East Balcony

11:00 EDT

The Next Phase of IoT - Information and Visualization
The built environment is changing with the addition of the virtual environment. More than ever, data and analytics permeate every aspect of our daily lives and work, and buildings are no different. The future of building design and operations is therefore live-streamed information via Digital Twins.

Learn how Digital Twins have been changing the industry and what benefits they can bring to your projects and assets. In this session we will review what a Digital Twin is, how they work, what it takes to create one, and the benefits for building owners and operators. We will also get into the practical needs for specifying a Digital Twin deliverable and what kinds of platforms can support their operation. So don't get left behind by the evolution of Digital Twins; check out this talk!

Learning objectives:

Learn what a Digital Twin is and how it corresponds to the built environment

Understand what kinds of tools are available for creating a Digital Twin model

Learn about the data assets you need and the formats to use for a Twin (IFC, BCF, JSON, etc.)

Become able to request a Digital Twin deliverable in your next project, with sample guidelines
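To make the "live streaming information" idea concrete, here is a sketch of the kind of JSON sensor payload a Digital Twin platform might ingest alongside its IFC/BCF model data. The schema below is purely illustrative, not IFC, BCF, or any published standard.

```python
import json

# Hypothetical live reading from a building asset, keyed to a model
# element ID so the twin can overlay it on the 3D geometry.
reading = {
    "asset_id": "AHU-02",                      # hypothetical air-handling unit
    "zone": "Level 3 / East Wing",
    "timestamp": "2022-08-18T11:00:00-04:00",
    "measurements": {
        "supply_air_temp_c": 13.5,
        "fan_speed_pct": 72,
        "filter_dp_pa": 145,
    },
}

payload = json.dumps(reading, indent=2)
print(payload)
```

A deliverable specification would pin down exactly these field names, units, and update frequencies so owners and operators get consistent data across vendors.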

Speakers

Tadeh Hakopian

Developer, HMC
Tadeh is a developer and designer in architecture (buildings, not computers). He has been a course author, trainer, and open source contributor. Over the years he has taught other designers the value of coding and automation and wants to continue to spread that message to as many people…


Thursday August 18, 2022 11:00 - 11:50 EDT
East Balcony

13:00 EDT

Open Hardware Initiative Series: Reinforcement Learning based HLS Compiler Tuning
Despite the proliferation of Field Programmable Gate Arrays (FPGAs) in both the cloud and the edge, the complexity of hardware development has limited their accessibility to developers. High Level Synthesis (HLS) offers a possible solution by automatically compiling CPU code to custom circuits, but currently delivers far lower hardware quality than circuits written in Hardware Description Languages (HDLs). This is because the standard set of code optimizations used by CPU compilers, such as LLVM, is not suited to an FPGA backend.

To bridge the gap between hand-tuned and automatically generated hardware, it is thus important to determine the optimal pass ordering for HLS compilation, which can vary substantially across workloads. Since there are dozens of possible passes and virtually infinite combinations of them, manually discovering the optimal pass ordering is not practical. Instead, we use reinforcement learning to automatically learn how to best optimize a given workload (or a class of workloads) for FPGAs. Specifically, we investigate the use of reinforcement learning to discover the optimal set of optimization passes (including their ordering and frequency of application) for LLVM-based HLS, a technique for compiler tuning that has been shown to be effective for CPU workloads.

In this talk, we will present the results of our experiments exploring how HLS compiler tuning is affected by different reinforcement learning strategies, including, but not limited to: i) selection of features, ii) methods for reward calculation, iii) selection of agent, iv) action space, and v) training parameters. Our goal is to identify strategies that converge to the best possible solution, take the least time to do so, and provide results that can be applied to a class of workloads instead of individual ones (to avoid retraining the model).
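The shape of the search problem can be sketched with a toy tabular Q-learning loop. The pass names and the reward function below are invented for illustration; a real system would run the HLS flow and score the generated circuit instead.

```python
import random

# Toy sketch: learn a compiler pass ordering with tabular Q-learning.
# PASSES and reward() are hypothetical stand-ins for a real HLS flow.
PASSES = ["inline", "loop-unroll", "mem2reg", "licm"]
SEQ_LEN = 3

def reward(sequence):
    # Pretend this toy flow likes mem2reg first and loop-unroll last,
    # and penalizes applying the same pass twice.
    score = 0.0
    if sequence[0] == "mem2reg":
        score += 1.0
    if sequence[-1] == "loop-unroll":
        score += 1.0
    score -= (len(sequence) - len(set(sequence))) * 0.5
    return score

def train(episodes=2000, eps=0.2, alpha=0.1):
    random.seed(0)
    q = {}  # (step, pass) -> estimated value
    for _ in range(episodes):
        seq = []
        for step in range(SEQ_LEN):
            if random.random() < eps:                      # explore
                choice = random.choice(PASSES)
            else:                                          # exploit
                choice = max(PASSES, key=lambda p: q.get((step, p), 0.0))
            seq.append(choice)
        r = reward(seq)
        for step, p in enumerate(seq):                     # update estimates
            key = (step, p)
            q[key] = q.get(key, 0.0) + alpha * (r - q.get(key, 0.0))
    return q

q = train()
best = [max(PASSES, key=lambda p: q.get((step, p), 0.0)) for step in range(SEQ_LEN)]
print("learned ordering:", best)
```

The real design questions in the talk, such as feature selection, reward calculation, agent choice, and action space, correspond to replacing each piece of this toy (the state key, `reward`, the epsilon-greedy agent, and `PASSES`) with something that generalizes across workloads.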

Speakers

Thursday August 18, 2022 13:00 - 13:25 EDT
East Balcony

13:30 EDT

Open Hardware Initiative Series: Dynamic Infrastructure Services Layer for FPGAs
FPGAs have long filled crucial niches in networking and at the edge by combining powerful computing and communication, hardware flexibility, and energy efficiency. However, FPGA development and design portability remain challenging: the entire hardware stack is commonly rebuilt for each deployment.
Operating-system-like abstractions, referred to as Shells or hardware Operating Systems (hOS), can help reduce the development complexity of FPGA workloads by connecting the IP blocks needed to support core functionality, e.g. memory, network, and I/O controllers. However, existing hOS have a number of limitations, such as the use of IP blocks that cannot be modified, fixed resource overhead, tightly coupled IP blocks, and unique interfaces that reduce design portability. As a result, existing hOS are typically only useful for specific workloads, interfaces, vendors, and hardware deployed in a specific infrastructure configuration (e.g. a SmartNIC).
In this work, we present the Dynamic Infrastructure Services Layer (DISL) for FPGAs as a solution to the above limitations. DISL is a framework that allows developers to generate hOS that can be either generic or customized to user requirements such as the target workload, FPGA size, FPGA vendor, available peripherals, etc. DISL does so through a number of features, such as: i) use of open source, heavily parameterized, and vendor-agnostic IP blocks, ii) a modular layout and configurable interconnect, iii) standard Application Programming Interfaces (APIs) at both the inter- and intra-device level, iv) automatic detection of an application's hOS requirements for components and connectivity (both compile-time and run-time) during compilation, and v) a DISL software development kit (SDK), integrated into the Linux kernel, that gives users access to tools for configuration, monitoring, debugging, and various other utilities that reduce the complexity of developing, deploying, and interfacing FPGA workloads.


Thursday August 18, 2022 13:30 - 14:20 EDT
East Balcony

14:30 EDT

Open Hardware Initiative Series: Optimizing open source tooling for FPGA bitstream
The flexibility, high performance, and power efficiency of Field Programmable Gate Arrays (FPGAs) have resulted in greater ubiquity in both cloud and edge environments. However, the existing state-of-the-art vendor tooling for FPGA bitstream generation lacks a number of features that are critical for high productivity, which in turn results in long turnaround times (hours to days) and substantially limits the manner in which FPGAs can be used. Since this tooling is also closed source, it cannot be modified to incorporate additional functionality. On the other hand, while there are a number of open source alternatives, these tools currently deliver only a fraction of the hardware quality of vendor tooling, making their use impractical for most workloads.

Our work aims to bridge this gap between open source and vendor tooling for FPGA bitstream generation, in order to make the former a viable solution to the low productivity of FPGA development. To do so, we first build a synthetic benchmark set that can be used to identify and analyze policy decisions made by tools that impact generated hardware quality. Next, we apply these benchmarks to open source tools in order to determine bottlenecks or suboptimal policies. Finally, we optimize the identified policies, either manually or through reinforcement learning (to automatically determine the best strategy for a given design).

To demonstrate the effectiveness of our approach, we apply it to packing, a critical step in the bitstream generation process that impacts device resource utilization. The open source tool we use is Versatile Place and Route (VPR). In this talk, we will look at the details of packing policies, the synthetic benchmarks we built, and the metrics we developed to determine packing quality.

Speakers

Shachi Vaman Khadilkar

Student, University of Massachusetts-Lowell


Thursday August 18, 2022 14:30 - 15:20 EDT
East Balcony

15:30 EDT

Open Hardware Initiative Series: Relational Memory: Native In-Memory Stride Access
Over the past few years, large-scale real-time data analytics has soared in popularity as the demand for analyzing fresh data grows. Hence, modern systems must bridge transactional and analytical needs, often referred to as Hybrid Transactional/Analytical Processing (HTAP). Analytical systems typically use a columnar layout to access only the desired fields. On the contrary, storing data row-first works great for accessing, inserting, or updating entire rows. But transforming rows to columns at runtime is expensive, so many analytical systems ingest row-major data and eventually load it into a columnar system or in-memory accelerator for future analytical queries. However, these systems generally suffer from high complexity, high materialization cost, and heavy bookkeeping overheads.
How will this design change if the optimal layout was always available?
We present a radically new approach, termed Relational Memory (RM), that converts rows into columns at runtime. We rely on a hardware accelerator that sits between the CPU and main memory and transparently converts base data to any group of columns with minimal overhead. To support different layouts over the same base data, we introduce ephemeral variables, a special type of variable that is never instantiated in main memory. Instead, upon accessing them, the underlying machinery generates a projection of the requested columns according to the format that maximizes data locality.
We implement and deploy RM in a commercially available platform that includes CPUs and an FPGA. We demonstrate that RM provides a significant performance advantage: accessing the desired columns is up to 1.63x faster than the row-wise counterpart, while matching the performance of columnar access at low projectivity and outperforming it by up to 1.87x as projectivity increases. Our next steps include supporting selection in hardware to reduce unnecessary data movements and integrating the proposed design within a DDR4 memory controller.
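The row-to-column projection that RM performs in hardware can be illustrated in a few lines of plain Python; the table contents below are invented for illustration, and the real system does this transparently between CPU and memory rather than in software.

```python
# Sketch of the row-to-column projection Relational Memory performs:
# given row-major base data, materialize only the requested columns,
# contiguously, so a scan touches no unneeded fields.
rows = [
    # (order_id, customer, amount, region)
    (1, "acme", 120.0, "us-east"),
    (2, "globex", 75.5, "eu-west"),
    (3, "initech", 310.0, "us-east"),
]

def project(rows, col_indices):
    """Return the requested columns as contiguous lists (column-major)."""
    return [[row[i] for row in rows] for i in col_indices]

# An analytical query over (amount, region) never pays for the other fields.
amount_col, region_col = project(rows, [2, 3])
print(amount_col)   # [120.0, 75.5, 310.0]
print(region_col)   # ['us-east', 'eu-west', 'us-east']
```

Doing this in software is exactly the expensive runtime transformation the abstract describes; RM's point is that a hardware unit on the memory path can produce such projections with minimal overhead.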


Speakers

Ahmed Sanaullah

Senior Data Scientist, Red Hat Inc.

Ulrich Drepper

System Research & Data Science, CTO Office, Red Hat


Thursday August 18, 2022 15:30 - 15:55 EDT
East Balcony

16:00 EDT

Open Hardware Initiative Series: Q & A Panel Discussion


Speakers

Shachi Vaman Khadilkar

Student, University of Massachusetts-Lowell

Ulrich Drepper

System Research & Data Science, CTO Office, Red Hat

Ahmed Sanaullah

Senior Data Scientist, Red Hat Inc.


Thursday August 18, 2022 16:00 - 16:25 EDT
East Balcony
 
Friday, August 19
 

14:00 EDT

What's the Latest with Research in Open Source?
Corporate research departments are often pretty siloed, secretive even. Academia can be siloed in its own way, even if individual researchers collaborate—as well as being out of touch with current industry concerns. Open source software can close that gap and turn it into a virtuous cycle.

In this talk, Red Hat's Gordon Haff will cover some of the things we've learned at Red Hat Research in putting together a new type of research program rooted in industry-academia collaboration and open source. He'll cover early collaboration through the Mass Open Cloud (MOC). Built on open source projects, most notably Kubernetes and OpenStack, the MOC is a great case study: a breeding ground for open source innovation, where software is continuously developed, integrated, optimized, and enhanced in a real-world cloud setting, but one that is not subject to the constraints imposed by large commercial public cloud providers.

Other exciting areas of ongoing research range from unikernels to FPGAs to self-tuning systems at scale to preserving privacy in datasets. Come learn about what’s happening in open source on the cutting edge.

Speakers

Gordon Haff

Technology Advocate, Red Hat
Gordon Haff is Technology Advocate at Red Hat, where he works on market insights; writes about tech, trends, and their business impact; and is a frequent speaker at customer and industry events. Among the topics he works on are edge, AI, quantum, cloud-native platforms, and next-generation…



Friday August 19, 2022 14:00 - 14:25 EDT
East Balcony

15:00 EDT

IPFS: What is it and Why Should I Care?
IPFS is a protocol for distributing data: storing and accessing files, websites, applications, and data in a peer-to-peer fashion. In contrast to traditional location-addressed schemes, where accessing a photo requires you to specify the image's exact location (e.g. https://my-server.com/koala.jpg), IPFS allows users to give the global network a Content Identifier (CID) and receive a list of anonymous peers from which the image can be retrieved. The user may then provide the image back to the network, increasing the concurrency available for future downloads. In other words, the more popular something is, the quicker it will be to download. Content is also verifiable, since we can recalculate the downloaded content's CID to confirm it matches what we asked for.

Currently, IPFS has a strong foothold in Web3 projects, as it allows data to be easily stored in a decentralized manner. The audience will gain an understanding of how IPFS works and how it can be used to efficiently distribute data.
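The verification step described above can be sketched in a few lines. Note this is a deliberate simplification: real IPFS CIDs use multihash/CIDv1 encodings over chunked Merkle DAGs, and here a plain SHA-256 hex digest stands in for the CID.

```python
import hashlib

# Simplified content addressing: the identifier is derived from the
# bytes themselves, so any peer's response can be checked locally.
def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_cid: str) -> bool:
    """Recompute the identifier of downloaded bytes and compare."""
    return content_id(data) == expected_cid

original = b"koala photo bytes"
cid = content_id(original)

# A peer returning the right bytes passes verification...
assert verify(b"koala photo bytes", cid)
# ...and tampered bytes are rejected without trusting the peer.
assert not verify(b"not the koala", cid)
```

This is why retrieval from anonymous peers is safe: trust attaches to the identifier, not to whoever served the bytes.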

Speakers

Friday August 19, 2022 15:00 - 15:25 EDT
East Balcony

15:30 EDT

Sigstore & Ferris: Rust in Supply Chain Security
Sigstore is gaining momentum as a new standard for signing, verifying and protecting software. It aims to improve supply chain technology for anyone using open source projects: it is created for open source maintainers, by open source maintainers.

Rust is a systems programming language known for its speed and built-in emphasis on security. While many of the tools in the Sigstore ecosystem are written in Go, some portions of these tools are now being ported to Rust, which will allow them to be available for more diverse use cases and environments. In addition to giving an overview of Sigstore, this session will cover uses of Rust in Sigstore and how these compare with their Go counterparts.

Together these two security-focused efforts can complement each other and make security in open source more usable and accessible. Learn how Sigstore can make software signing and key management easier!


Speakers

Lily Sturmann

Senior Software Engineer, Red Hat
Lily is a senior software engineer at Red Hat in the Office of the CTO in Emerging Technologies. She has primarily worked on security projects related to remote attestation and confidential computing, and more recently on securing the software supply chain. She has spoken at numerous…

Jyotsna Penumaka

Software Engineer, Red Hat
Jyotsna is a software engineer at Red Hat in the Office of the CTO in Emerging Technologies. Her interests center around cybersecurity and operating systems. She has previously conducted research on control flow integrity mechanisms, as well as Enarx, a deployment system enabling…


Friday August 19, 2022 15:30 - 15:55 EDT
East Balcony
 