This week
  • Nagarro
    • Expertise in Java and/or Scala
    • Expertise in cloud and open source technologies such as Git, Spark, and Docker
    • Familiarity with relational and big data stores such as Postgres, Hadoop, NoSQL databases, and columnar storage formats such as Parquet
    • Strong skills in analytic computing and algorithms
    • TDD expertise
    • Ability to define crisp interfaces, with attention to performance and scalability as they are built
    • Passion for finding and solving problems
    • Prior history with agile development
    • Experience in AWS (MUST HAVE)
    • Excellent communication skills, proven ability to convey complex ideas to others in a concise and clear manner
    • Data store knowledge is important: Postgres, Hadoop, NoSQL databases, and columnar storage formats such as Parquet
    • Work in PST time zone
This month
  • Thinkful Inc.
    $10,000.00 - $30,000.00

    Data Science Course Mentor

    • Mentorship

    • Remote

    • Part time

    **Who We Are**
    Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. The company provides 1-on-1 learning through its network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development and data science, with in-person communities in up-and-coming tech hubs around the U.S.

    **About the Role**
    Thinkful’s Flexible Data Science course pairs personalized mentorship with a curriculum tailored to launch aspiring data scientists’ careers. Join us in helping motivated learners get to those aha! moments. As a Flexible Data Science Course Mentor, you will help your student(s) master everything from fundamental statistics to building a machine learning model in their domain of choice.


    • Motivate & foster best practices with your student(s) as they work to build project management skills, a strong portfolio, and the confidence to network and interview for jobs.
    • Work with Program Managers to provide detailed feedback on student success, including struggles or technical mastery issues.
    • Meet one-on-one with your student(s) in hour-long sessions, held twice per week.


    • Minimum of 1 year professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
    • Proficiency in SQL, Python
    • Professional experience with Hadoop and Spark a plus
    • Excellent written and verbal communication
    • High level of empathy and people management skills
    • Must have a reliable, high-speed Internet connection


    • This is a part-time role (10-25 hours a week)
    • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
    • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
    • Full access to all of Thinkful Courses for your continued learning
    • Grow as an Educator

    **Apply on our website.**

  • Surge

    SURGE is looking for smart, self-motivated, experienced senior engineers who enjoy the freedom of telecommuting and flexible schedules on a variety of software development projects.


    Data Engineer Openings requiring ETL and Hadoop 

    Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

    For immediate consideration, email your resume with your tech stack listed under each job, and include your cell phone number and start date: [email protected]

  • PowerInbox
    $100,000.00 - $140,000.00
    Preferred time zone: (GMT-05:00) Eastern Time

    If you join us, what will you do?

    Write, test, and maintain computer software that will double our capacity to recommend ads each year for the next three years. You will also provide technical specifications and unit test cases for the software.

    Specific Goals

    • Scale our ad recommender platform to double its capacity.
    • Increase revenue per 1,000 items by $0.10 each quarter.
    • Have unit tests to cover all code paths for all code written.
    • Establish continuous integration by writing automatic deployment scripts.
    • Maintain 99% uptime of the software when deployed into production.
    • Write technical specifications for software being created.
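
    The posting itself contains no code; as an illustration of what the "unit tests to cover all code paths" goal can look like in practice, here is a hypothetical sketch using Python's built-in unittest (the function, names, and business rule are invented for illustration — the role itself centers on Scala):

```python
import unittest

def eligible_for_ad(user_age, opted_in):
    """Hypothetical business rule: only opted-in adults are shown ads."""
    if user_age < 18:
        return False
    if not opted_in:
        return False
    return True

class EligibilityTests(unittest.TestCase):
    # one test per branch, so every code path is exercised
    def test_minor_rejected(self):
        self.assertFalse(eligible_for_ad(16, opted_in=True))

    def test_opted_out_rejected(self):
        self.assertFalse(eligible_for_ad(30, opted_in=False))

    def test_opted_in_adult_accepted(self):
        self.assertTrue(eligible_for_ad(30, opted_in=True))

# run the suite programmatically (avoids unittest.main()'s sys.exit)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(EligibilityTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

    Each branch of the function gets its own test case, which is the simplest way to make "all code paths covered" checkable rather than aspirational.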

    In order to be great at your job…

    You Are

    A fast learner with great analytical skills; relentless and persistent in accomplishing goals; enthusiastic, with an infectious personality.

    You Work

    Efficiently; with flexibility; proactively; with attention to detail; to high standards.

    Together We

    Emphasize honesty and integrity; require teamwork; have open communication; follow through on commitments; stay calm under pressure.

    You Have

    Advanced Scala skills (at least 5 years of experience); computer science knowledge (education or actual work experience); Linux experience (at least 5 years); relational database skills; NoSQL database experience; Hadoop skills (at least 5 years); and experience with Kafka, Storm, or Kinesis (at least 5 years in one of them).

    This is extra, but if you have it, it will make us happy

    • Experience in working remotely
    • Knowledge of/interest in the digital and AdTech landscape

    About PowerInbox

    Who We Are

    We are a digital monetization startup ecosystem that is always open to new talent.

    Why We Are

    Personalization is key and we at PowerInbox believe that email is not meant to be stationary and static, but relevant and filled with dynamic content and advertisements.

    What We Are

    We at PowerInbox boost your revenue and brand engagement through real-time advertising and native ad displays.

    If interested, please send your resume to [email protected]

  • Noddus

    About the role

    We are looking for an experienced Data Scientist (Ad-Tech) to join us and be the foundation of our Data Science team as we continue to grow and scale our application.

    • Process, cleanse, and verify the integrity of the data used for analysis.
    • Develop forecasting and reporting procedures that instantly highlight business opportunities and flag potential issues.
    • Conceptualize and build dashboards that are simple, visually appealing, yet showcase all the key data trends and metrics to ease reporting to all business stakeholders.
    • Design and develop machine learning models and algorithms that drive performance and provide insights (e.g., Real Time Bidding algorithms for pacing & optimization, clustering algorithms, lookalike modeling, fraud detection, device identification, cross-device association, ad inventory estimation, audience segmentation, and other Ad-Tech applications).
    • Rapidly develop a deep understanding of the quantitative methodologies and implementation details that will best power our optimization engine. Methods used could include linear regression models, k-means clustering, linear programs, mixed integer programs, and other machine learning and data mining strategies.
    • Develop tools and processes to monitor performance of existing models and implement enhancements to improve scalability, reliability, and performance.
    • Partner closely with Engineering on the architecture and implementation of modeling efforts to ensure performance and scalability.
    • Partner closely with Product on the incorporation of new modeling features into our product set, including UI & API layers.
    • Communicate effectively with Product, Engineering and Sales to identify and define strategic data-intensive projects based on business needs.
    • Create supporting documentation for algorithms and models.
    • Stay abreast of new developments in machine learning and data science, and investigate & develop new approaches to continue innovating.
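
    Since the responsibilities above name k-means clustering among the candidate methods, here is a minimal illustrative sketch of Lloyd's algorithm in NumPy. The data, function, and parameters are invented for illustration and are not part of the role:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment
    with centroid recomputation."""
    rng = np.random.default_rng(seed)
    # start from k distinct data points as initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distances from every point to every centroid: shape (n, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # avoid emptying a cluster
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated synthetic blobs (illustrative data only)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

    In production, a library implementation (e.g., scikit-learn or Spark MLlib, both mentioned elsewhere in this digest's stacks) would replace a hand-rolled loop like this.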

    We are looking for someone confident with the following background and skills

    • Degree in a quantitative discipline (e.g., Computer Science, Math, Physics, Statistics, Engineering, or similar).
    • Python/Java.
    • Strong quantitative skills with a solid grasp of key concepts in probability, statistics, algorithm design, and machine learning.
    • Experience with Machine-Learning/Big-Data Platforms and modeling frameworks, especially Spark, Hadoop and EMR.
    • Experience with statistical modeling and visualization with Python or R.
    • Experience with SQL and Excel.
    • Strong DNN background with proven experience in TensorFlow building real world solutions on large scale data sets.
    • Knowledge of machine learning, NLP, classifiers, statistical modeling and multivariate optimization techniques.
    • General understanding of data structures, algorithms, multi threading, and distributed computing concepts.
    • Good communication and writing skills (Docs and Collaboration).

    We know that’s already a lot, but going the “extra mile” will surely make you stand out

    • Docker and Kubernetes.
    • Have a good understanding of online advertising technologies and ecosystem.
    • Experience with Real Time Bidding.
    • Experience working with BI solutions (e.g., Tableau).
    • Communicating with GIFs 😜.


    • Sharp, motivated co-workers.
    • Very flexible work schedule.
    • A flat structure that’s always open to hearing opinions and receiving feedback; we understand that we can constantly improve so we greatly value individuals with an entrepreneurial spirit that are willing to put great ideas forward.
    • You will be part of a product that is seeing exceptional growth. We are onto something.

    Up for a challenge?

  • phData

    If you're inspired by innovation, hard work, and a passion for data, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering, or Data Analytics to design, develop, and innovate big data solutions for a diverse set of global and enterprise clients.

    At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters conveniently located in Downtown Minneapolis and expanding throughout the US. Notably we've also been voted Best Company to Work For in Minneapolis for the last 2 years.   

    As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.

    In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications - in addition to generous PTO and employee equity. 

    As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

    • Design and develop innovative Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.

    • Determine the technical project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions. Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews.

    • Work across a broad range of technologies – from infrastructure to applications – to ensure the ideal Hadoop solution is implemented and optimized

    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources

    • Design and implement streaming, data lake, and analytics big data solutions

    • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines

    • Select the right storage solution for a project - comparing Kudu, HBase, HDFS, and relational databases based on their strengths

    • Utilize ETL processes to build data repositories; integrate data into Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), Spark, Hive or Impala (transformation)

    • Partner with our Managed Services team to design and install on prem or cloud based infrastructure including networking, virtual machines, containers, and software

    • Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala

    • Local candidates work between client sites and our office (Minneapolis). Remote US candidates must be willing to travel 20% for training and project kick-offs.

    Technical Leadership Qualifications

    • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst

    • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  

    • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.

    • Expert programming experience in Java, Scala, or other statically typed programming language

    • Ability to learn new technologies in a quickly changing field

    • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries

    • Excellent communication skills including proven experience working with key stakeholders and customers


    • Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics

    • Experience scoping activities on large scale, complex technology infrastructure projects

    • Customer relationship management including project escalations, and participating in executive steering meetings

    • Coaching and mentoring data or software engineers

  • Signifyd

    You will have the opportunity to apply your knowledge of machine learning, statistics and your analytical skills to develop models detecting fraud patterns. You will ideate, test and deploy advanced predictive signals to improve fraud detection performance. You will collaborate with other data scientists and engineers to build data pipelines, do feature prototyping, and write production-grade code to implement analytical algorithms and flexible strategies.

    Specific job duties may include:

    • Writing or modifying data pipelines to process and mine historical data

    • Processing and analyzing data collected with research prototypes

    • Ideating, prototyping, and measuring predictive features that transform data into actionable information

    • Prototyping and validating models and algorithms to boost model performance

    • Writing production code (Python, SQL, etc.) to deliver analytics content

    Required Skills and Experience:

    • An advanced degree (M.S. or Ph.D.) in computer science, applied mathematics, or a comparable analytical field from an accredited institution

    • Experience on an analytical team targeting fraud/risk in online commerce, banking, or finance

    • Expert proficiency with an advanced data analysis toolkit (such as Python/matplotlib, R, ROOT, etc.)

    • Superior SQL skills with proven experience in relational databases and data warehouses

    • Demonstrated fluency with Python and at least one other programming language

    • Experience with NoSQL databases and unstructured data

    • Experience setting up and using distributed/parallel processing frameworks such as Spark, Hadoop, Storm etc. is a big plus

    • Demonstrated ability to develop high-quality code adhering to industry best practices (i.e., code review, unit tests, Gitflow)

    • Possession of core analytics skills and expertise (as demonstrated by prior work):

    • Knowledge of applied statistics and key concepts underlying statistical inference and inductive reasoning

    • Experience designing experiments and collecting data

    • Experience developing models based on sensor data, and an understanding of error propagation and the limitations of data subject to measurement uncertainties

    • Demonstrable expertise in one or more areas: applied mathematics, predictive analytics, expert systems, ANNs/deep learning, graph theory, Markov Chain Monte Carlo, geo-informatics (GIS), language processing, risk analysis

    • Work/project history reflective of a self-motivated professional who excels when given open-ended problems and broadly-defined goals, having an innate desire to discover the patterns and relationships in data that can be leveraged to provide business value
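
    As a small illustration of the experiment-design and statistical-inference skills listed above, here is a hedged sketch of a two-sample permutation test using only the Python standard library. The function, data, and metric names are hypothetical, not taken from the posting:

```python
import random
from statistics import mean

def permutation_test(a, b, n_perm=2000, seed=42):
    """Two-sample permutation test: how often does a random relabeling
    of the pooled data produce a mean difference at least as extreme
    as the one actually observed?"""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if perm_diff >= observed:
            extreme += 1
    return extreme / n_perm  # approximate p-value

# hypothetical conversion metrics for two experiment arms
control = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.11]
variant = [0.16, 0.18, 0.15, 0.17, 0.19, 0.16, 0.18, 0.17]
p = permutation_test(control, variant)
```

    A permutation test makes no distributional assumptions, which is why it is often a reasonable first tool when validating a new predictive signal against historical data.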

    All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.

    Posted positions are not open to third party recruiters/agencies and unsolicited resume submissions will be considered free referrals.

  • TargetSmart

    Remote Python Software Engineer


    We are looking for a self-driven software engineer to take ownership of existing software products for performance and feature upgrades as well as general maintenance. You will also contribute as a team member to testing, QA and documentation across our product line. Additional opportunities will arise to design and develop new software products. You will join a small agile team of motivated individuals who welcome challenges, adapt quickly, strive to acquire new knowledge, learn new technologies, accept new responsibilities, and work well individually, as a team, and with other teams within the organization.

    TargetSmart employees communicate primarily through chat, weekly calls, and the occasional email. Working remotely is embraced (US only). We care deeply about each other, our technology, and our mission.

    **Minimum Qualifications**

    • Bachelor’s degree in Computer Science or equivalent

    • 3+ years of experience in software engineering

    • Strong knowledge of Python

    • US citizen or green card holder

    **Desired Technical Experience**

    • Python application development

    • AWS: Infrastructure and workflow development

    • SQL, NoSQL, and Hadoop-ecosystem databases (RDS; S3 data lake: Athena, Hive, et al.; MongoDB)

    • Git, BitBucket, Jira, Slack

    **Desired Domain Experience**

    • “Big Data” in the context of large voter/marketing lists

    • Data-driven targeted marketing solutions

    • Digital advertising platforms

    • Democratic/Progressive political campaign technology ecosystem


    • Competitive salary and annual bonus based on company performance

    • Excellent health and dental plans

    • Monthly home office stipend

    • Generous PTO and flexible sick-time policy

    • Flight and hotel to annual company and team meetings

    About TargetSmart

    Founded in 2006, TargetSmart is a for-profit business in the Democratic and progressive political data and technology ecosystem. TargetSmart’s expert team of data, political, direct marketing, and technical professionals wakes up every day with one objective: to help our clients win with data.

    TargetSmart is a leading provider of political data and technology that enables campaigns and organizations to successfully communicate with large audiences, personalize outreach, and create lasting relationships. Our superior politically-focused, consultative approach combines consumer data, databases, data integration and consulting solutions for personalized multichannel marketing strategies. TargetSmart leverages over 25 years of experience in data management to deliver high-performance, reliable data products and solutions.