About Onehouse

Onehouse is a mission-driven company dedicated to freeing data from data platform lock-in. We deliver the industry's most interoperable data lakehouse through a cloud-native managed service built on Apache Hudi. Onehouse enables organizations to ingest data at scale with minute-level freshness, centrally store it, and make it available to any downstream query engine and use case, from traditional analytics to real-time AI/ML.

We are a team of self-driven, inspired, and seasoned builders who have created large-scale data systems and globally distributed platforms that sit at the heart of some of the largest enterprises out there, including Uber, Snowflake, AWS, LinkedIn, Confluent, and many more. Riding off a fresh $35M Series B backed by Craft, Greylock, and Addition Ventures, we're now at $68M total funding and looking for rising talent to grow with us and become future leaders of the team. Come help us build the world's best fully managed and self-optimizing data lake platform!

The Community You Will Join

When you join Onehouse, you're joining a team of passionate professionals tackling the deeply technical challenges of building a two-sided engineering product. Our engineering team serves as the bridge between the worlds of open source and enterprise: contributing directly to and growing Apache Hudi (already used at scale by global enterprises like Uber, Amazon, and ByteDance) while concurrently defining a new industry category: the transactional data lake.

The Impact You Will Drive

As an engineer on the Open Source team at Onehouse, you'll play a pivotal role in shaping and realizing the vision and roadmap for Apache Hudi, while also shaping the future of the data lakehouse space.

- Collaborate across multiple teams within Onehouse, serving as the vital bridge between the open-source Apache Hudi project and Onehouse's managed solution, ensuring seamless collaboration and integration.
- Engage closely with community partners and contributors, serving as a steward of the Apache Hudi project, fostering collaboration and guiding its evolution.
- Champion a culture of innovation, quality, and timely execution, enabling the team to deliver on the vision of the next-generation data lakehouse.
- Architect and implement solutions that scale to accommodate the rapid growth of our customer base, our open source community, and the ever-expanding demands of the data lake ecosystem at large.

A Typical Day

- Design, build, and deliver features and improvements to Apache Hudi.
- Ensure high-quality and timely delivery of innovations and improvements in Apache Hudi.
- Dive deep into the architectural details of data ingestion, data storage, data processing, and data querying to ensure that Apache Hudi is built to be the most robust, scalable, and interoperable data lakehouse.
- Own discussions and work with open source partners/vendors to troubleshoot issues with Hudi, ensure Hudi support for compute engines like Presto and Trino, and act as the face of Hudi to the community at large via meetups, customer meetings, talks, etc.
- Partner with and mentor engineers on the team.

What You Bring to the Table

- 5-7+ years building large-scale data systems.
- You embrace ambiguous/undefined problems, with an ability to think abstractly and articulate technical challenges and solutions.
- A positive attitude toward seeking solutions to hard problems, with a bias toward action and forward progress.
- An ability to quickly prototype new directions, shape them into real projects, and analyze large/complex data.
- Strong object-oriented design and coding skills in Java, preferably on a UNIX or Linux platform.
- Experience with the inner workings of distributed (multi-tiered) systems, algorithms, and relational databases.
- Experience with large-scale data compute engines / processing frameworks.
- Experience building distributed systems, data storage systems, or query engines.
- An ability to prioritize across feature development and tech debt, balancing urgency and speed.
- An ability to solve complex programming/optimization problems.
- Robust and clear communication skills.

Nice to Haves (But Not Required)

- Experience working with open source projects and communities.
- Experience in optimization mathematics (linear programming, nonlinear optimization).
- Publications on optimizing large-scale data systems in top-tier distributed systems conferences.
- A PhD or Master's degree in a related field, with industry experience in solving and delivering high-impact optimization projects.

How We'll Take Care of You