This month - Remote Hadoop jobs
  • Edward Jones
    Preferred timezone: UTC +6

    At Edward Jones we are developing a next-generation data architecture to support our growing business. As part of the Data Management area, you will be tasked with provisioning new datasets for analytic and operational use cases, as well as making changes to existing loads. This will bring you into contact with a diverse array of stakeholders, from IS to analytics teams across the firm.

    What you'll do

    • Interact directly with requestors from multiple divisions of the firm to understand their documented data requirements
    • Recommend an ETL design based on the requirements of the specific use case and provide accurate estimates of effort
    • Partner with RDBMS DBAs to understand the source data structures and design of the target structures
    • Use the appropriate tools and frameworks available to develop the data acquisition and ingestion process in accordance with the approved design
    • Leverage workflow tools to maintain accurate status of assigned tasks (JIRA, etc.)
    • Include the necessary data validation steps to confirm completeness and accuracy of the data (see the sketch after this list)
    • Perform initial validation of the process and inspection of the data
    • Utilize the deployment frameworks available to move artifacts from development to test to production environments
    • Ensure jobs are scheduled to run on a frequency consistent with stakeholder requirements
    • Ensure role-based access is established on target tables
    • Support ETL jobs once deployed in production to ensure SLAs are met for data consumers
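
    As a concrete illustration of the validation step above, here is a minimal PySpark sketch of the kind of completeness and accuracy checks an ingestion job might run after a load. The table and column names are hypothetical, not Edward Jones specifics.

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("load-validation").getOrCreate()

      # Hypothetical target table populated by the ETL load.
      target = spark.table("analytics.customer_accounts")

      # Completeness: the load should carry over every row from the staging extract.
      source_count = spark.table("staging.customer_accounts_raw").count()
      target_count = target.count()
      assert target_count >= source_count, f"row loss: {source_count} -> {target_count}"

      # Accuracy: key fields must be populated.
      null_keys = target.filter(F.col("account_id").isNull()).count()
      assert null_keys == 0, f"{null_keys} rows missing account_id"
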
  • SemanticBits
    Preferred timezone: UTC +5

    SemanticBits is looking for a seasoned Software Engineer in Test with at least three years of experience testing backend, data-oriented applications. You will play a crucial part in improving the quality, speed to delivery, and consistency of software and data used to improve patient care across the country. You'll be working with our engineers, product owners, and technical support teams within an Agile development process to assess risk and help define the process required to build quality into everything that we ship.

    We strongly believe that the path to high-quality software is an engineering-focused process supported by test engineering and quality assurance. As a team we have worked hard to avoid a "toss it over the wall" mentality between engineering and test. Instead, we believe in a collaborative approach to defining the steps necessary to build quality into the engineering and release process.

    Our application will be developed using Scala, Spark, Hadoop, and SQL. Data will be in data warehouses, data marts, JSON, CSV, etc. Leveraging modern frameworks to write data-driven tests is required, along with experience integrating automated testing into CI/CD processes.
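
    For illustration only (the posting's stack is Scala/Spark, but the pattern is language-agnostic): a data-driven test typically parameterizes one assertion over many datasets. A minimal pytest/PySpark sketch, with hypothetical file and column names:

      import pytest
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("data-tests").getOrCreate()

      # Hypothetical extracts: each case pairs an input file with the key it must satisfy.
      CASES = [
          ("claims_2019.csv", "claim_id"),
          ("providers_2019.csv", "provider_id"),
      ]

      @pytest.mark.parametrize("path,key_column", CASES)
      def test_key_is_unique_and_non_null(path, key_column):
          df = spark.read.csv(path, header=True)
          assert df.filter(df[key_column].isNull()).count() == 0
          assert df.select(key_column).distinct().count() == df.count()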

    While SemanticBits is one of the leading companies specializing in the design and development of digital health services, what makes us unique is not what we do, but rather the culture in which we do it. We are an established company with the mindset of a startup. That means that all of our employees contribute equally to our success. There is no hierarchy to navigate, and by taking advantage of a flexible office environment—as well as a remote workforce across the country—we allow our employees to find the working conditions that are best for their individual success.

    Responsibilities

    • Perform exploratory, manual, and automated testing activities as required within the sprint release cycle.

    • Develop and implement test automation systems and frameworks for software testing

    • Design robust test plans with a broad system understanding

    • Assess risk in engineering deliverables and define testing strategies to mitigate it.

    • Define and own the test engineering stack and toolset.

    • Define and implement testing best practices.

    • Triage and diagnose issues, debug to root cause, and drive them to resolution

    REQUIREMENTS

    • Bachelor’s Degree in Computer Science or related field

    • At least 3 years of experience testing backend, data-oriented applications

    • Hands-on experience with different types of testing (Unit, Integration, Data-driven, Exploratory, etc.)

    • Knowledge of relational database concepts and excellent SQL skills

    • Experience with the Linux command line

    • Experience handling large datasets in CSV and JSON format

    • Expertise validating large datasets using automated and manual tests

    • Experience testing Scala code running in Spark highly desirable

    • Knowledge of professional software engineering best practices for the full software development life cycle, source control, build and release processes, containerization technologies, and competency with test suite development and maintenance

    • Experience with one or more continuous integration tools (e.g. Jenkins), version control systems (e.g. Git)

    • Working knowledge of agile/iterative practices

    • Competency with test case automation frameworks

    • Strong critical thinking, attention to detail, and analytical skills

    • Strong oral and written communication skills

    • Ability to work independently

  • Surge
    PROBABLY NO LONGER AVAILABLE. Must be located: North America. Preferred timezone: UTC +8

    Surge Forward is looking for a smart, self-motivated, experienced, senior-level remote developer to work as a long-term independent contractor.

    Experience Required: 

    Big Data, Hadoop EcoSystem, AWS

    Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

    For immediate consideration, email your resume with the tech stack listed under each job, and include the versions of Angular you have coded in (directly on the resume) as well as your cell phone number and start date.

Older - Remote Hadoop jobs
  • Nagarro
    PROBABLY NO LONGER AVAILABLE.

    Required experience and skills: 

    • Expertise in Java or Scala
    • Familiarity with cluster computing technologies such as Apache Spark or Hadoop MapReduce
    • Familiarity with relational and big data stores such as Postgres, HDFS, Apache Kudu, and similar technologies
    • Strong skills in analytic computing and algorithms
    • Strong mathematical background, including statistics and numerical analysis
    • Knowledge of advanced programming concepts such as memory management, files & handles, multi-threading and operating systems.
    • Passion for finding and solving problems
    • Excellent communication skills, proven ability to convey complex ideas to others in a concise and clear manner 

    Desirable experience and skills: 

    • Familiarity with scripting languages such as Python or R
    • Experience in performance measurement, bottleneck analysis, and resource usage monitoring
    • Familiarity with probabilistic and stochastic computational techniques
    • Experience with data access and computing in highly distributed cloud systems
    • Prior history with agile development
  • NationBuilder
    PROBABLY NO LONGER AVAILABLE. Must be located: United States of America.

    Our engineering team dedicates itself to continuous learning and improvement. We built a process optimized for rapid, agile development, deploying to production many times a day. To discover the correct solution, we start with a minimum viable product and iterate using team and stakeholder feedback, so that the people, product, and process improve together. Work out of our Los Angeles offices, or remotely.

    As a developer, you’ll help us build and maintain our products. You’ll recommend and implement system-wide improvements and new technologies, and contribute to our technological direction.

    NationBuilder creates software for leaders of all kinds - political candidates, nonprofit organizations, and anyone building a community of people to make change happen in the world. To learn more about NationBuilder, read about our mission and beliefs.

    You:

    • are always interested in learning new things.
    • get excited when you have the chance to pair.
    • practice test-driven development and judicious refactoring.
    • enjoy being responsive to customer feedback.
    • are a pragmatic problem solver, knowing that perfect is the enemy of done.
    • work well in small teams with a clear mission.
    • have the insight to know what's important and the dedication to get it done.
    • are comfortable with ambiguity and know how to keep us moving forward.

    We are looking for someone who:

    • Has 2-5 years of professional development experience

    Our Stack:

    Our platform is primarily built with Ruby on Rails, with some additional Ruby and Go services. Data is stored in PostgreSQL, MongoDB, Couchbase, Redis, and Hadoop, all on a cloud-native architecture in AWS managed with Terraform and Puppet.

    NationBuilder is an equal opportunity employer and we value diversity. We are committed to finding talent without regard to race, religion, color, national origin, gender, gender identity, sexual orientation, age, marital status, veteran status, or disability status.

  • phData
    PROBABLY NO LONGER AVAILABLE.

    If you're inspired by innovation, hard work and a passion for data, this may be the ideal opportunity to leverage your background in Big Data and your Software Engineering, Data Engineering, or Data Analytics experience to design, develop, and innovate big data solutions for a diverse set of global and enterprise clients.

    At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters, conveniently located in downtown Minneapolis, and expanding throughout the US. Notably, we've also been voted Best Company to Work For in Minneapolis for the last two years.

    As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.

    In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications - in addition to generous PTO and employee equity. 

    As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

    • Design, develop, and deliver innovative Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.

    • Determine the technical project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions.  Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews

    • Work across a broad range of technologies – from infrastructure to applications – to ensure the ideal Hadoop solution is implemented and optimized

    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources

    • Design and implement streaming, data lake, and analytics big data solutions

    • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines

    • Select the right storage solution for a project - comparing Kudu, HBase, HDFS, and relational databases based on their strengths

    • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation); see the sketch after this list

    • Partner with our Managed Services team to design and install on prem or cloud based infrastructure including networking, virtual machines, containers, and software

    • Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala

    • Local candidates work between client sites and our Minneapolis office. Remote US candidates must be willing to travel 20% for training and project kick-offs.
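
    A minimal sketch of the streaming leg of such a pipeline, assuming an illustrative Kafka topic and lake path (the batch leg would typically be a Sqoop import into the same lake):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col

      spark = SparkSession.builder.appName("lake-ingest").getOrCreate()

      # Streaming ingest: land Kafka events in the data lake as Parquet, ready
      # for transformation with Spark, Hive, or Impala. Running this requires
      # the Spark-Kafka connector on the classpath.
      events = (spark.readStream
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
                .option("subscribe", "orders")                     # hypothetical topic
                .load()
                .select(col("value").cast("string").alias("payload")))

      query = (events.writeStream
               .format("parquet")
               .option("path", "/data/lake/orders")                # hypothetical lake path
               .option("checkpointLocation", "/data/checkpoints/orders")
               .start())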

    Technical Leadership Qualifications

    • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst

    • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  

    • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc

    • Expert programming experience in Java, Scala, or other statically typed programming language

    • Ability to learn new technologies in a quickly changing field

    • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries

    • Excellent communication skills including proven experience working with key stakeholders and customers

    Leadership

    • Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics

    • Experience scoping activities on large scale, complex technology infrastructure projects

    • Customer relationship management including project escalations, and participating in executive steering meetings

    • Coaching and mentoring data or software engineers

  • Nagarro
    PROBABLY NO LONGER AVAILABLE.
    • Expertise in Java and any of the JVM Languages 
    • Expertise in cloud and open source technologies such as Git, Spark, and Docker
    • Familiarity with relational and big data stores such as Postgres, Hadoop, NoSQL databases, and columnar storage formats such as Parquet
    • Strong skills in analytic computing and algorithms
    • Test Driven development expertise
    • Ability to define crisp interfaces and to think about performance and scalability as they are built
    • Passion for finding and solving problems
    • Prior history with agile development
    • Experience in AWS (MUST HAVE)
    • Excellent communication skills, proven ability to convey complex ideas to others in a concise and clear manner
  • Nagarro
    PROBABLY NO LONGER AVAILABLE.
    • Expertise in Java and/or Scala
    • Expertise in cloud and open source technologies such as Git, Spark, and Docker
    • Familiarity with relational and big data stores such as Postgres, Hadoop, NoSQL databases, and columnar storage formats such as Parquet
    • Strong skills in analytic computing and algorithms
    • TDD expertise
    • Ability to define crisp interfaces and to think about performance and scalability as they are built
    • Passion for finding and solving problems
    • Prior history with agile development
    • Experience in AWS (MUST HAVE)
    • Excellent communication skills, proven ability to convey complex ideas to others in a concise and clear manner
    • Data store knowledge is important: Postgres, Hadoop, NoSQL databases, and columnar storage formats such as Parquet
    • Work in PST time zone
  • Thinkful Inc.
    PROBABLY NO LONGER AVAILABLE. $10,000.00 - $30,000.00.

    Data Science Course Mentor

    • Mentorship

    • Remote

    • Part time

    Who We Are

    Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. The company provides 1-on-1 learning through its network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development and data science, with in-person communities in up-and-coming tech hubs around the U.S.

    About the Role

    Thinkful’s Flexible Data Science course pairs personalized mentorship with a curriculum tailored to launch aspiring data scientists’ careers. Join us in helping motivated learners get to those aha! moments. As a Flexible Data Science Course Mentor, you will help your student(s) master everything from fundamental statistics to building a machine learning model in their domain of choice.

    Responsibilities

    • Motivate & foster best practices with your student(s) as they work to build project management skills, a strong portfolio, and the confidence to network and interview for jobs.
    • Work with Program Managers to provide detailed feedback on student success, including struggles or technical mastery issues.
    • Meet one-on-one with your student(s) in hour-long sessions, held twice per week.

    Requirements

    • Minimum of 1 year professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
    • Proficiency in SQL, Python
    • Professional experience with Hadoop and Spark a plus
    • Excellent written and verbal communication
    • High level of empathy and people management skills
    • Must have a reliable, high-speed Internet connection

    Benefits

    • This is a part-time role (10-25 hours a week)
    • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
    • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
    • Full access to all of Thinkful Courses for your continued learning
    • Grow as an Educator

    Apply on our website: https://hire.withgoogle.com/public/jobs/thinkfulcom/view/P_AAAAAAEAAADC9Bx7fyGYVv

  • Surge
    PROBABLY NO LONGER AVAILABLE. Must be located: North America. Preferred timezone: UTC -8

    SURGE is looking for smart, self-motivated, experienced, senior engineers who enjoy the freedom of telecommuting and flexible schedules to work on a variety of software development projects.

    REQUIRED:

    Data Engineer Openings requiring ETL and Hadoop 

    Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

    For immediate consideration, email resume with tech stack under each job and include your cell phone number and start date: [email protected]

  • PowerInbox
    PROBABLY NO LONGER AVAILABLE. $100,000.00 - $140,000.00. Preferred timezone: UTC -5

    If you join us, what will you do?

    You will write, test, and maintain software that will double our capacity to recommend ads each year for the next three years, and provide technical specifications and unit test cases for that software.

    Specific Goals

    • Scale our ad recommender platform to double its capacity.
    • Increase revenue per 1,000 items by $0.10 each quarter.
    • Have unit tests to cover all code paths for all code written.
    • Establish continuous integration by writing automatic deployment scripts.
    • Maintain 99% uptime of the software when deployed into production.
    • Write technical specifications for software being created.

    In order to be great at your job…

    You Are

    A fast learner with great analytical skills; relentless and persistent in accomplishing goals; enthusiastic, with an infectious personality.

    You Work

    Efficiently; with flexibility; proactively; with attention to detail; to high standards.

    Together We

    Emphasize honesty and integrity; require teamwork; have open communication; follow through on commitments; stay calm under pressure.

    You Have

    Advanced Scala skills (at least 5 years of experience); computer science knowledge (education or actual work experience); Linux experience (at least 5 years); relational database skills; NoSQL database experience; Hadoop skills (at least 5 years); and experience with Kafka, Storm, or Kinesis (at least 5 years in one of them).

    This is extra, but if you have it, it will make us happy

    • Experience in working remotely
    • Knowledge of/interest in the digital and AdTech landscape

    About PowerInbox

    Who We Are

    We are a digital monetization startup ecosystem that is always open to new talent.

     Why We Are

    Personalization is key and we at PowerInbox believe that email is not meant to be stationary and static, but relevant and filled with dynamic content and advertisements.

     What We Are

    We at PowerInbox boost your revenue and brand engagement through real-time advertising and native ad displays.

    If interested, please send your resume to [email protected]

  • Noddus
    PROBABLY NO LONGER AVAILABLE.

    About the role

    We are looking for an experienced Data Scientist (Ad-Tech) to join us and be the foundation of our Data Science team as we continue to grow and scale our application.

    • Responsible for processing, cleansing, and verifying the integrity of the data used for analysis.
    • Develop forecasting and reporting procedures that instantly highlight business opportunities and flag potential issues.
    • Conceptualize and build dashboards that are simple, visually appealing, yet showcase all the key data trends and metrics to ease reporting to all business stakeholders.
    • Design and develop machine learning models and algorithms that drive performance and provide insights (e.g., Real Time Bidding algorithms for pacing & optimization, clustering algorithms, lookalike modeling, fraud detection, device identification, cross-device association, ad inventory estimation, audience segmentations, and other Ad-Tech applications.).
    • Rapidly develop a deep understanding of the quantitative methodologies and implementation details that will best power our optimisation engine. Methods used could include linear regression models, k-means clustering, linear programs, mixed integer programs, and other machine learning and data mining strategies (see the sketch after this list).
    • Develop tools and processes to monitor performance of existing models and implement enhancements to improve scalability, reliability, and performance.
    • Partner closely with Engineering on the architecture and implementation of modeling efforts to ensure performance and scalability.
    • Partner closely with Product on the incorporation of new modeling features into our product set, including UI & API layers.
    • Communicate effectively with Product, Engineering and Sales to identify and define strategic data-intensive projects based on business needs.
    • Create supporting documentation for algorithms and models.
    • Stay abreast of new developments in machine learning and data science, and investigate & develop new approaches to continue innovating.
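
    As a toy illustration of the clustering work mentioned above, a k-means audience segmentation might start from a sketch like this; the feature matrix here is synthetic and stands in for real engagement data:

      import numpy as np
      from sklearn.cluster import KMeans

      # Synthetic stand-in for per-user engagement features (e.g. clicks,
      # impressions, spend); a real pipeline would read these from the warehouse.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(10_000, 3))

      # Segment the audience into k behavioral clusters.
      model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
      segments = model.labels_           # cluster id per user
      profiles = model.cluster_centers_  # mean feature profile per segment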

    We are looking for someone confident with the following background and skills

    • Degree in a quantitative discipline (e.g., Computer Science, Math, Physics, Statistics, Engineering, or similar).
    • Python/Java.
    • Strong quantitative skills with a solid grasp of key concepts in probability, statistics, algorithm design, and machine learning.
    • Experience with Machine-Learning/Big-Data Platforms and modeling frameworks, especially Spark, Hadoop and EMR.
    • Experience with statistical modeling and visualization with Python or R.
    • Experience with SQL and Excel.
    • Strong DNN background, with proven experience using TensorFlow to build real-world solutions on large-scale data sets.
    • Knowledge of machine learning, NLP, classifiers, statistical modeling and multivariate optimization techniques.
    • General understanding of data structures, algorithms, multithreading, and distributed computing concepts.
    • Good communication and writing skills (Docs and Collaboration).

    We know that’s already enough, but going the “extra mile” will surely make you stand out

    • Docker and Kubernetes.
    • Have a good understanding of online advertising technologies and ecosystem.
    • Experience with Real Time Bidding.
    • Experience working with BI solutions (e.g., Tableau).
    • Communicating with GIFs 😜.

    Benefits

    • Sharp, motivated co-workers.
    • Very flexible work schedule.
    • A flat structure that’s always open to hearing opinions and receiving feedback; we understand that we can constantly improve so we greatly value individuals with an entrepreneurial spirit that are willing to put great ideas forward.
    • You will be part of a product that is seeing exceptional growth. We are onto something.

    Up for a challenge?

  • Signifyd
    PROBABLY NO LONGER AVAILABLE.

    You will have the opportunity to apply your knowledge of machine learning, statistics and your analytical skills to develop models detecting fraud patterns. You will ideate, test and deploy advanced predictive signals to improve fraud detection performance. You will collaborate with other data scientists and engineers to build data pipelines, do feature prototyping, and write production-grade code to implement analytical algorithms and flexible strategies.

    Specific job duties may include:

    • Writing or modifying data pipelines to process and mine historical data

    • Processing and analyzing data collected with research prototypes

    • Ideating, prototyping, and measuring predictive features that transform data into actionable information (see the sketch after this list)

    • Prototyping and validating models and algorithms to boost model performance

    • Writing production code (python, SQL, etc.) to deliver analytics content
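
    A minimal sketch of what prototyping one such predictive feature might look like; the order data and column names are hypothetical, not Signifyd's actual schema:

      import pandas as pd

      # Hypothetical order history; a real pipeline would mine this from
      # historical data in the warehouse.
      orders = pd.DataFrame({
          "account_id": [1, 1, 2, 2, 2],
          "order_total": [20.0, 500.0, 35.0, 30.0, 40.0],
      })

      # Candidate fraud signal: how far each order deviates from the
      # account's typical spend.
      stats = orders.groupby("account_id")["order_total"].agg(["mean", "std"])
      features = orders.join(stats, on="account_id")
      features["spend_zscore"] = (features["order_total"] - features["mean"]) / features["std"]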

    Required Skills and Experience:

    • An advanced degree (M.S. or Ph.D.) in computer science, applied mathematics, or a comparable analytical field from an accredited institution

    • Experience on an analytical team targeting fraud/risk in online commerce, banking, or finance

    • Expert proficiency with an advanced data analysis toolkit (such as python/matplotlib, R, ROOT, etc.)

    • Superior SQL skills with proven experience in relational databases and data warehouses

    • Demonstrated fluency with python and at least one other programming language

    • Experience with NoSQL databases and unstructured data

    • Experience setting up and using distributed/parallel processing frameworks such as Spark, Hadoop, Storm etc. is a big plus

    • Demonstrated ability to develop high-quality code adhering to industry best practices (e.g., code review, unit tests, Gitflow)

    • Possession of core analytics skills and expertise (as demonstrated by prior work):

    • Knowledge of applied statistics and key concepts underlying statistical inference and inductive reasoning

    • Experience designing experiments and collecting data

    • Experience developing models based on sensor data, and an understanding of error propagation and the limitations of data subject to measurement uncertainties

    • Demonstrable expertise in one or more areas: applied mathematics, predictive analytics, expert systems, ANNs/deep learning, graph theory, Markov Chain Monte Carlo, geo-informatics (GIS), language processing, risk analysis

    • Work/project history reflective of a self-motivated professional who excels when given open-ended problems and broadly-defined goals, having an innate desire to discover the patterns and relationships in data that can be leveraged to provide business value

    All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.

    Posted positions are not open to third party recruiters/agencies and unsolicited resume submissions will be considered free referrals.

  • TargetSmart
    PROBABLY NO LONGER AVAILABLE.

    Remote Python Software Engineer

    US ONLY - REMOTE

    We are looking for a self-driven software engineer to take ownership of existing software products for performance and feature upgrades as well as general maintenance. You will also contribute as a team member to testing, QA and documentation across our product line. Additional opportunities will arise to design and develop new software products. You will join a small agile team of motivated individuals who welcome challenges, adapt quickly, strive to acquire new knowledge, learn new technologies, accept new responsibilities, and work well individually, as a team, and with other teams within the organization.

    TargetSmart employees communicate primarily through chat, weekly calls, and the occasional email. Working remotely is embraced (US only). We care deeply about each other, our technology, and our mission.

    Minimum Qualifications

    • Bachelor’s degree in Computer Science or equivalent

    • 3+ years of experience in software engineering

    • Strong knowledge of Python

    • US citizen or green card holder

    Desired Technical Experience

    • Python application development

    • AWS: Infrastructure and workflow development

    • SQL, NoSQL, and Hadoop-ecosystem databases (RDS; S3 data lake: Athena, Hive, et al.; MongoDB)

    • Git, BitBucket, Jira, Slack

    Desired Domain Experience

    • “Big Data” in the context of large voter/marketing lists

    • Data-driven targeted marketing solutions

    • Digital advertising platforms

    • Democratic/Progressive political campaign technology ecosystem

    Benefits

    • Competitive salary and annual bonus based on company performance

    • Excellent health and dental plans

    • Monthly home office stipend

    • Generous PTO and flexible sick-time policy

    • Flight and hotel to annual company and team meetings

    About TargetSmart

    Founded in 2006, TargetSmart is a for-profit business in the Democratic and progressive political data and technology ecosystem. TargetSmart’s expert team of data, political, direct marketing, and technical professionals wakes up every day with one objective: to help our clients win with data.

    TargetSmart is a leading provider of political data and technology that enables campaigns and organizations to successfully communicate with large audiences, personalize outreach, and create lasting relationships. Our superior politically-focused, consultative approach combines consumer data, databases, data integration and consulting solutions for personalized multichannel marketing strategies. TargetSmart leverages over 25 years of experience in data management to deliver high-performance, reliable data products and solutions.