Senior Solutions Architect
We are Upsolver! Our mission is to turn cost-effective cloud data lakes into easy-to-use repositories for all data practitioners, by eliminating the lake’s notorious engineering complexity. Upsolver cuts 95% from the tedious process of preparing event data for analytics and machine learning, using a visual SQL-based interface, a cloud-native data platform, and deep technology for stream-processing, indexing, and eventually-consistent file systems.
Upsolver is already serving data-driven customers around the world, processing hundreds of petabytes every month. Upsolver’s platform is recommended repeatedly by AWS for its outstanding ease-of-use, for even the most ambitious use cases, and it’s the only officially recommended partner for the AWS Athena service. We are a small, highly results-oriented team with deep technical expertise who love to simplify and scale data infrastructure.
Upsolver’s Senior Solutions Architect drives customer satisfaction with a solution-based approach and deep technical expertise in streaming, big data, and cloud. A successful Senior Solutions Architect is curious, self-motivated, and excels at cross-functional collaboration. The goal is to understand customer needs and challenges around streaming data and become a trusted, valued advisor.
- Engage with prospects and customers through product demos, POCs, on-boarding, and day-to-day use of the platform.
- Tailor big data solutions using Upsolver’s product, streaming platforms like Apache Kafka, and complementary cloud services like Athena, Redshift, etc.
- Address competitive analysis inquiries.
- Build content for blogs, documentation, videos, and any other assets that explain how to use the Upsolver platform to solve real-world use cases.
- Customer success: dive deep into the customer organization alongside the sales team in order to find new opportunities.
What We Look For
- At least 2 years of experience as a Data Architect or Head of Data, specifically with transformation-heavy data pipelines.
- At least 3 years as a Data Engineer or Front-End DBA.
- Experience with modern databases and query engines (Presto, Spark, Redshift, Snowflake, BigQuery) - preferred.
- Deep experience in big data (ideally in data engineering) is required, including strong knowledge of Hadoop, Spark, or NoSQL.
- Extensive experience with AWS - mandatory.
- Proven track record of running high-velocity activity: you context-switch easily, thrive when multitasking, and prefer a fast pace.
- Stream-processing knowledge and experience (Kafka, Flink, Kinesis) - highly desirable.
- Comfortable talking up and down the IT chain of command, including directors, managers, architects, analysts, and developers.
- Familiarity with the full range of data engineering approaches, covering both theoretical best practices and their technical application.
- Ability to work in a lean, highly collaborative team with a focus on getting things done.
Treat the company like it’s yours and earn the customer’s trust; while we must work together as one team, we value integrity first by treating our co-workers, customers, and partners like we would like to be treated ourselves. We obsessively listen to our customers and our teammates to truly understand.

We believe in creating the best technology for big data. For us, this means always insisting on the best possible processing performance, the lowest possible infrastructure cost, elasticity, and flexibility to support any possible processing use case. We achieve this by simplifying and inventing solutions that are an order of magnitude easier than what our customers currently use.

Finally, we value communicating fearlessly by disagreeing and committing. Harmony can be the enemy of excellence and quality. We believe in challenging decisions and deliverables when we disagree, even when doing so is uncomfortable or difficult. We expect our team to be tenacious and not to compromise for the sake of social cohesion. Once a decision is settled, commit wholly.