This week - Remote ETL jobs
  • Elastic

    At Elastic, we have a simple goal: to solve the world's data problems with products that delight and inspire. As the company behind the popular open source projects — Elasticsearch, Kibana, Logstash, and Beats — we help people around the world do great things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. The Elastic family unites employees across 30+ countries into one coherent team, while the broader community spans over 100 countries.

    We’re looking to add a new Product Management node to the Elastic Product cluster. At Elastic, we believe that the data analytics journey starts with data ingestion, and real-time insights require data that is ingested scalably and securely, ready for immediate use in Elasticsearch. Beats are a suite of lightweight data shippers that enable users to collect and ship all types of data, from logs and metrics to wire and security data.

    As a Senior Product Manager for Beats, you’ll be involved in all things data collection and ingest, from directly impacting the roadmap to the overall go-to-market strategy. If you’re passionate about data ingestion and want to democratize data for all with open source software, then this job is for you!

    What You Will Be Doing

    • Engage with customers, users, and internal teams to understand use cases and product requirements, bringing those insights back to engineering
    • Work with our engineers and designers on features across all Beats (Filebeat, Metricbeat, Functionbeat, etc.) like growing integrations, improving the core engine, and taking ease of use to the next level with centralized management capabilities
    • Work with product marketing to showcase our ingest features and educate our global sales, support, and consulting teams
    • Track and improve KPIs around adoption and usage of Beats
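
    To ground the ingest mechanics this role covers: at its core, a shipper like Filebeat tails a source and bulk-indexes events into Elasticsearch. Below is a minimal sketch of that mechanic using the official Elasticsearch Python client; the index name and log path are invented for illustration, not part of the posting.

      # A minimal ingest sketch (an assumed example, not Beats' actual Go internals).
      from elasticsearch import Elasticsearch, helpers

      es = Elasticsearch("http://localhost:9200")

      def log_events(path):
          # Yield one bulk action per log line, the way a lightweight shipper would.
          with open(path) as f:
              for line in f:
                  yield {"_index": "app-logs", "_source": {"message": line.rstrip()}}

      helpers.bulk(es, log_events("/var/log/app.log"))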

    What You Bring Along

    • A proven track record of 2+ years in product management
    • Excellent spoken and written English communication skills
    • Technical understanding of data ingestion mechanics and architecture
    • Prior experience with data collection agents or stream analytics products
    • Curiosity, empathy, and a collaborative spirit

    In Addition, You May Have

    • Bachelor’s degree in a technical field (e.g. CS, CSE, EE) or relevant work experience in software development, DevOps, solutions architecture, or pre-sales engineering
    • Hands-on experience with logging, metrics, and/or security analytics products and their audiences (Operations, Data Science, DevOps, SecOps, InfoSec)
    • Comfort working with a highly distributed team across the world
    • Open source software and/or commercial open source company experience
    • Familiarity with the Elastic Stack along with its products and use cases

    Elastic is an Equal Employment Opportunity employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state, or local law, ordinance, or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

  • phData

    Are you inspired by innovation, hard work and a passion for data?    

    If so, this may be the ideal opportunity to leverage your Software Engineering, Data Engineering or Data Analytics experience to design, develop, and deliver innovative big data solutions for a diverse set of clients.

    At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth and an expanded presence at our company headquarters conveniently located in Downtown Minneapolis (Fueled Collective).

    As the world’s largest pure-play Big Data, Machine Learning and Data Science services firm, our team includes Apache committers, Machine Learning experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Big Data and Machine Learning services and our commitment to excellence.

    In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks, including a competitive base salary, annual bonus, extensive training, and paid Cloudera certifications, as well as generous PTO and employee equity.

    As a Machine Learning Engineer, your responsibilities include:

    • Convert proofs of concept into production-grade solutions that can scale to hundreds of thousands of users

    • Create and manage machine learning pipelines on a Hadoop cluster to support any kind of model deployment on streaming or batch data

    • Tackle challenging problems, such as developing web services and ETL pipeline components, to productize and evaluate machine learning models

    • Write production code and collaborate with Solutions Architects and Data Scientists to implement algorithms in production

    • Design, conduct, and analyze experiments to validate proposed ML modeling approaches as well as improvements to existing ML pipelines
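
    As a flavor of the pipeline work above, here is a minimal, hedged sketch of a Spark ML pipeline going from a batch source to a saved, redeployable model; the dataset path and column names are invented for illustration.

      # A minimal Spark ML pipeline sketch; paths and column names are illustrative.
      from pyspark.sql import SparkSession
      from pyspark.ml import Pipeline
      from pyspark.ml.feature import VectorAssembler
      from pyspark.ml.classification import LogisticRegression

      spark = SparkSession.builder.appName("poc-to-prod").getOrCreate()
      df = spark.read.parquet("hdfs:///data/events")  # batch here; streaming is analogous

      assembler = VectorAssembler(inputCols=["visits", "spend"], outputCol="features")
      lr = LogisticRegression(labelCol="churned")  # consumes the "features" column above
      model = Pipeline(stages=[assembler, lr]).fit(df)

      model.write().overwrite().save("hdfs:///models/churn")  # redeployable artifact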

    Qualifications

    • Previous experience as a Software Engineer, Data Engineer or Data Scientist (with hands-on engineering experience)
    • Solid programming experience in Python, Java, Scala, or another general-purpose programming language
    • Hands-on experience with one or more big data ecosystem products/languages such as Spark, Impala, Solr, Kudu, etc.
    • Experience working with Data Science/Machine Learning software and libraries such as h2o, TensorFlow, Keras, scikit-learn, etc.
    • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
    • Excellent communication skills; previous experience working with internal or external customers
    • Strong analytical abilities; ability to translate business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access, and consumption, as well as custom analytics
    • 4-year Bachelor's degree in Computer Science or a related field, or equivalent years of professional working experience

    Keywords: Hive, Apache Spark, Java, Apache Kafka, Big Data, Spark, Solution Architecture, Cloudera, Apache Pig, Hadoop, NoSQL, Cloudera Impala, Scala, Python, Data Engineering, Big Data Analytics, Large Scale Data Analysis, ETL, Linux, Kudu, Pandas, TensorFlow, h2o, R, Keras, PyTorch, scikit-learn, Machine Learning, Machine Learning Engineering, Data Science, PySpark, NLP

This month - Remote ETL jobs
  • CipherHealth

    CipherHealth is an award-winning healthcare technology company with a suite of intuitive patient engagement solutions that streamline the responsibilities of hospital staff, increase patient involvement and satisfaction, and positively influence outcomes.

    Our team is incredible and mission driven. We are an award-winning company that strives to do better every day. In addition to fun team activities, a company masseuse, and a ton of healthy snacks, we are driven by our work ethic, commitment to core values, and firm knowledge that we are helping people every day.

    Job Responsibilities:

    • Identifies deep-rooted, systemic issues within the stack and leads long-term initiatives to address them
    • Has deep expertise across multiple domains
    • Takes ownership of technical projects that have a strategic impact at CipherHealth
    • Assists the technical lead in meeting goals related to the long-term roadmap for major areas of the data engineering stack
    • Introduces improvements that help the entire team become more productive and ship higher quality work
    • Articulates the technical direction of the data engineering team
    • Partners effectively with the technical lead and business stakeholders on strategic projects

    Technical Responsibilities:

    • Deliver technical solutions and capabilities that support our entire data platform
    • Implement new, highly scalable platform components and tools, leveraging machine learning and deep learning models to solve healthcare problems
    • Own the data pipeline (ETL) from MongoDB to the data warehouse (sketched below)
    • Build features to support reporting and data science projects
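
    A hedged, minimal illustration of that MongoDB-to-warehouse hop, using pymongo and psycopg2; the collection, table, and field names are invented, and a Postgres-compatible warehouse is assumed:

      # Minimal ETL hop sketch: MongoDB -> warehouse (assumed Postgres-compatible).
      from pymongo import MongoClient
      import psycopg2

      mongo = MongoClient("mongodb://localhost:27017")
      dwh = psycopg2.connect("dbname=dwh")
      cur = dwh.cursor()

      # Extract completed engagement events, flatten, and load idempotently
      # (assumes a unique key on event_id in the warehouse table).
      for doc in mongo.cipher.events.find({"status": "complete"}):
          cur.execute(
              "INSERT INTO fact_events (event_id, patient_id, occurred_at) "
              "VALUES (%s, %s, %s) ON CONFLICT (event_id) DO NOTHING",
              (str(doc["_id"]), doc.get("patient_id"), doc.get("occurred_at")),
          )
      dwh.commit()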

    Requirements:

    • Master’s degree or equivalent experience
    • 10 years of data engineering experience
    • Experience with production grade applications

    High proficiency with the following technologies:

    • Programming languages: Ruby, R, SQL/NoSQL, Bash, JavaScript
    • Technologies: MongoDB, Pentaho PDI, Unix, GitHub
    • Web framework

    We are also open to remote candidates outside of NYC. 

  • FBS

    Want to apply your programming expertise in a team environment that values your contribution, gives you the freedom to create, and provides the resources to get the job done, all while having true ownership in a growing company that's been leading real estate innovation for more than 40 years? At FBS, the creator of the Flexmls real estate software, it's all possible. We're 100% employee owned and looking for skilled, passionate, and talented data conversion programmers to join our Flexmls Data Conversion team.

    Skills & requirements

    Flexmls data conversion programmers work with our new customers to convert and load data from their existing MLS (multiple listing service) databases into our core databases. FBS's continued revenue growth relies on this team's successful conversion of customer data from customers' prior databases.

    Mid- to expert-level C and C++ skills are strongly desired, but a strong background in another language and a willingness to learn C and C++ will work too. We also work heavily with our own and external APIs, so the ability to use APIs for data transfer is important (a quick illustration follows the list below). Knowledge of any of the following is helpful but not required:

    • Moving data between data storage systems using C or C++
    • OOP experience, preferably with C or C++
    • Automated testing experience, again, preferably with C or C++
    • SQL (Postgres, DB2, MySQL) and NoSQL (MongoDB, Redis, Memcached) experience
    • Version control experience, e.g. Git
    • Linux administration experience
    • PHP experience
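
    The illustration promised above: the team's day-to-day is C and C++, but the underlying pattern of API-driven data transfer is language-agnostic. A minimal Python sketch, with the endpoint, table, and field names all invented:

      # Language-agnostic data-transfer sketch (Python for brevity); names invented.
      import requests
      import psycopg2

      resp = requests.get("https://api.example-mls.test/listings", timeout=30)
      resp.raise_for_status()

      conn = psycopg2.connect("dbname=flexmls")
      cur = conn.cursor()
      for listing in resp.json():
          cur.execute(
              "INSERT INTO listings (mls_id, price, status) VALUES (%s, %s, %s)",
              (listing["id"], listing["price"], listing["status"]),
          )
      conn.commit()
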
Older - Remote ETL jobs
  • Supermercato24
    PROBABLY NO LONGER AVAILABLE.

    Who are we looking for:

    We are looking for a savvy Data Engineer to join our growing tech team.

    You will support our software developers, data analysts, and data scientists on data initiatives and will ensure that our data delivery architecture is optimal and consistent across ongoing projects.

    The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

    Roles and Responsibilities:

    • you will build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and other big data technologies
    • you will design and model data structures to help analyze our business and technical data
    • you will support existing processes running in production
    • you will work together with people from other key areas to assist with data-related technical issues and support their data infrastructure needs

    Skills & Requirements

    • knowledge of relevant engineering best practices, data management fundamentals, and data storage principles, plus currency with recent advances in distributed systems as they pertain to data storage and computing
    • 2+ years of experience in designing, building and maintaining data architecture(s) and infrastructure(s), both relational and non-relational
    • 2+ years of maintaining data warehouse systems and working on large scale data transformation using SQL, Hadoop, Hive, or other Big Data technologies; experience with ETL tools is a plus
    • 2+ years of data modeling experience, and able to use data models to improve the performance of software services
    • experience with cloud-based solutions (AWS Redshift, GCP BigQuery) and programming languages (Python, Java) is a plus
    • experience communicating with colleagues from engineering, analytics, and business backgrounds
    • degree in Engineering, Math, Statistics, Computer Science, or related discipline or equivalent experience is a plus.
    • be able to legally work in Europe (you hold an EU passport, an EU residency permit, or a Schengen work visa)
  • Autosoft
    PROBABLY NO LONGER AVAILABLE. $75,000.00 - $95,000.00.

    Autosoft, Inc. (www.autosoftdms.com) is a Dealership Management System (DMS) software company that has served the retail automotive industry for 30 years. At Autosoft, we embrace change, encourage out-of-the-box thinking, and listen to our employees to help shape the future of our business.

    Who You Are:

    As the ETL Developer, you will participate in and lead discussions around new feature requirements and develop database functionality to meet those requirements, continually seeking ways to improve the process of data conversion, migration, and integration from current applications, APIs, and various data sources to new systems.

    What You'll Do:

    • Develop stored procedures, functions, etc. in T-SQL
    • Create database objects including tables, views, triggers, etc.
    • Perform data modeling for new applications or extensions of existing applications
    • Design ETL flows and data mapping/data migration processes to ensure successful data migration from legacy systems
    • Execute performance improvements through optimizing queries, adding indexing, or other methods
    • Build and execute data migration scripts
    • Build database release scripts and execute them in development, staging, and production environments
    • Perform ETL and data migration activities
    • Profile data to measure quality, integrity, accuracy, and completeness
    • Advise the business on options, risks, and any impacts related to data conversion and migration
    • Turn processes into detailed functional requirements, including but not limited to data mapping, file inventories, and data flow
    • Serve as a functional SME to the development team as they implement the data needs as documented
    • Communicate as necessary to ensure transparency, share insights, define scope and requirements, and coordinate between teams to manage expectations
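
    A hedged sketch of the kind of idempotent T-SQL migration step described above, driven from Python via pyodbc; the DSN, schemas, and column names are invented for illustration:

      # Minimal data-migration sketch; connection string and names are placeholders.
      import pyodbc

      conn = pyodbc.connect("DSN=autosoft_dev;Trusted_Connection=yes")
      cur = conn.cursor()

      # Copy legacy rows into the new schema, skipping records that already
      # exist so the script can be re-run safely.
      cur.execute("""
          INSERT INTO dbo.Customer (CustomerId, Name, CreatedAt)
          SELECT l.cust_id, l.cust_name, l.created
          FROM legacy.Customers AS l
          WHERE NOT EXISTS (
              SELECT 1 FROM dbo.Customer c WHERE c.CustomerId = l.cust_id
          );
      """)
      conn.commit()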

    What You'll Need:

    • Bachelor’s degree in Computer Science, Information Systems and/or equivalent formal training or work experience
    • Minimum 5 years of experience in development using Microsoft SQL Server (2012 or newer) – Standard and Enterprise
    • Advanced knowledge of SSIS and SSRS
    • Data Modeling, Data Archiving, Query & Performance Analysis Experience
    • Advanced T-SQL skills, ORM, stored procedures, table functions
    • Experience working with multi-tenant databases
    • Proven work experience on enterprise-level projects
    • Working experience in an Agile development environment strongly preferred
    • Excellent communication skills, both written and oral
    • Superior organization, prioritization, analytical and problem-solving skills
    • Ability to manage challenging timelines and schedules
    • History of implementing stable, robust solutions quickly and efficiently
    • Ability to adapt to changing assignments and multiple priorities
    • Ability to manage multiple tasks and successfully meet deadlines collaboratively with other departments
    • Experience with Git repositories
    • Familiar with using data formats JSON and XML
    • Basic understanding of API end points (SOAP and REST)
    • Experience with Agile Scrum and Kanban development processes
    • Experience defining data objects, models and data definitions for financial or automotive business systems by mapping its various components as required for Data Migration/Conversion/Integration
    • Experience in SQL index management, performance optimization with large datasets, data warehouse/data mining
  • O'Reilly Auto Parts
    PROBABLY NO LONGER AVAILABLE.

    The Data Acquisition (ETL) Developer will design, structure, and develop Informatica data warehouse program logic. The Extract Transform Load (ETL) Developer is responsible for the programs required to extract, transform, clean, and move data and metadata so they can be loaded into a data warehouse, data mart, or operational data store. In addition, they will lead the work of other ETL Developers and work closely with Data Analysts and Data Modelers to develop enterprise data integration solutions that promote re-usability and standardization.

    Essential Job Functions:
    • Design, code, and test major features, as well as work jointly with other team members to provide complex software enhancements for databases
    • Collaborate with the functional team and other team leads to create integration solutions
    • Apply generally accepted programming standards and techniques to assure efficient program logic and data manipulation
    • Lead design and review meetings with all appropriate parties
    • Ensure design, structure, and content meet performance guidelines; assist with test planning and testing
    • Develop detailed documentation, including functional and technical design documents, test cases, support documentation, migration and installation steps, release notes, mapping documents, etc.
    • Provide high-level analysis and design reviews to solve conceptual problems and avoid duplication of effort within the development group
    • Analyze the performance of current code as well as new code to ensure reasonable performance is met
    • Communicate with all levels of staff with regard to technical concepts and design in order to create the best product that meets all of the customer requirements
    • Communicate with project teams on problem resolution, design issues, and technical implementations

    Skills/Requirements/Knowledge

    REQUIRED:
    • 5+ years of experience as an ETL Developer using Informatica Enterprise Integration Tools (within an EDW):
      • PowerCenter 9.x
      • PowerExchange 9.x
    • Experience leading and developing medium- to large-scale Operational and Decision Support data integration projects, including real-time and web-service-based initiatives
    • Extensive experience with SQL on relational databases (SQL Server, DB2)
    • Experience with relational and dimensional data modeling
    • Experience building and operating a data warehouse
    • Knowledge of Data Profiling and Data Quality concepts and techniques
    • Critical thinker with excellent analytical and problem-solving skills
    • Advanced knowledge of industry standards/best practices surrounding all aspects of the Software Development Life Cycle

    DESIRED:
    • Bachelor’s degree
    • Exposure to data modeling tools like Visio, Embarcadero, Erwin
    • Master Data Management and Metadata Management experience
    • Experience with MicroStrategy

  • Toptal
    PROBABLY NO LONGER AVAILABLE.

    As the Data Engineering Manager, you will lead and develop a comprehensive data strategy to align our data technology offering with Toptal’s strategic business initiatives.

    This is a remote position that can be done from anywhere.

    Responsibilities:

    In this role, you will own the development and ongoing leadership of an exceptional data engineering and data science function supporting the entire Toptal business. Initial responsibilities will include establishing a proactive, data-driven team focused on the delivery of accurate data and information. You will be tasked with ensuring that the technical requirements to support the business analytics function are in alignment with our overall technical vision and roadmap. You will establish data governance best practices that align with the overall analytics vision.

    You will provide strong leadership and direction of a team tasked with development and execution of a comprehensive Data Engineering strategy, while providing timely and robust insights for decision-making that drives critical business decisions supporting Toptal’s aggressive growth plans.

    You will work in an entirely distributed company and help define an entirely new space while learning about how an organization is built from the ground up.

    In the first week you will:

    • Onboard and integrate into Toptal.
    • Rapidly begin learning about Toptal’s history, culture, and vision.
    • Shadow key teams across the company to learn the core of Toptal’s operations and capabilities.

    In the first month you will:

    • Complete a current state assessment.
    • Define processes that govern the intake, prioritization, and estimation of business requests, and ensure accurate and timely completion of work in progress.
    • Catalog and communicate what is available to the business in current state.
    • Stabilize ETL flows.
    • Validate that the data store is well architected, makes sense from a business use perspective, and make recommendations for improvement.
    • Evaluate current ad-hoc reporting capabilities and recommend future state to ensure agile response.

    In the first three months you will:

    • Partner with stakeholders to define the purpose and long term vision of the role that the Data Engineering Team serves in Toptal.
    • Recommend, gain agreement, and restructure the Data Engineering Teams to ensure efficient execution.
    • Clearly define what blend of data vision and exploration versus serving the functional needs is appropriate.
    • Provide leadership and direction to the Data Engineering Team to overhaul existing technologies with a goal to have a scalable, robust ETL layer and a near-real time data store that supports both the business and KPI reporting.
    • Dig into the data to validate designs, performance, data granularity, data recency, and more. You will not just make high-level recommendations.
    • Outline a high-performing and reliable data systems architecture.
    • Create OKRs for the Data Engineering Team that align with company-wide function attributes/Key Results.

    In the first six months you will:

    • Develop advanced analytic capabilities by providing a vision for how data science and machine learning capabilities can propel our business forward.
    • Rapidly gain an understanding of the business requirements and design roadmaps that respond to Toptal’s short and long-term strategic direction.
    • Provide role clarity to the existing team and determine and close gaps which will drive future hiring decisions.
    • Hire and retain the best team that can propel the Data Engineering vision forward.
    • Define and implement a Data Governance Framework covering data security, data integrity, and DRP/BCP.

    In the first year you will:

    • Grow the Data Engineering Team to support the aggressive growth trajectory that Toptal has planned.

    Requirements:

    • Proven capability in building and managing a data organization supporting multiple teams.
    • Self-starter who is able to be hands on and can analyze, interpret and derive insights into business reports, analytic models, and data sources.
    • Ability to learn business processes at a near native level to understand the role that analytics plays in supporting the business.
    • Ability to interpret business requirements into the required data structures, lineage (business purpose), and metadata.
    • Working knowledge of master data management.
    • Demonstrated experience with modern data technologies, including columnar data structures, unstructured data, NoSQL, machine learning, and cognitive computing.
    • Knowledge of foundational BI concepts such as 3NF, denormalization, OLAP, and slowly changing dimensions (see the sketch after this list).
    • Exposure to Scala, Python, and pandas, and familiarity with open source BI tools.
    • Experience working with Agile Project Management methodologies.
    • Experience coaching and growing individuals in analytics, data affluency, and reporting.
    • Ability to negotiate competing priorities in the business.
    • Exceptional communication skills.
    • Ability to build relationships at an executive level.
    • You must be a world-class individual contributor to thrive at Toptal. You will not be here just to tell other people what to do.
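
    The sketch referenced in the list above: a minimal Type 2 slowly-changing-dimension update in pandas. All column names are illustrative, and a real pipeline would do this set-based in the warehouse.

      # Minimal SCD Type 2 sketch; customer_id/city/valid_* columns are invented.
      import pandas as pd

      dim = pd.DataFrame({
          "customer_id": [1], "city": ["Austin"],
          "valid_from": [pd.Timestamp("2018-01-01")],
          "valid_to": [pd.NaT], "is_current": [True],
      })
      incoming = pd.DataFrame({"customer_id": [1], "city": ["Denver"]})
      today = pd.Timestamp("2019-06-01")

      # Find current rows whose tracked attribute changed in the incoming feed.
      merged = incoming.merge(dim[dim.is_current], on="customer_id", suffixes=("", "_old"))
      changed = merged[merged.city != merged.city_old]

      # Close out the superseded rows...
      mask = dim.customer_id.isin(changed.customer_id) & dim.is_current
      dim.loc[mask, "valid_to"] = today
      dim.loc[mask, "is_current"] = False

      # ...then append fresh "current" rows carrying the new value.
      new_rows = changed[["customer_id", "city"]].assign(
          valid_from=today, valid_to=pd.NaT, is_current=True)
      dim = pd.concat([dim, new_rows], ignore_index=True)
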
  • Surge
    PROBABLY NO LONGER AVAILABLE.

    SURGE is looking for smart, self-motivated, experienced senior engineers who enjoy the freedom of telecommuting and flexible schedules to work on a variety of software development projects.

    REQUIRED:

    Data Engineer Openings requiring ETL and Hadoop 

    Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

    For immediate consideration, email resume with tech stack under each job and include your cell phone number and start date: [email protected]

  • Integrated Data Services (IDS)
    PROBABLY NO LONGER AVAILABLE.

    Senior Data Engineer

    Remote Work Allowed

    Company Overview:

    Integrated Data Services, Inc. (IDS) is a leading provider of custom software products and Government financial management services. IDS was founded in 1997 in El Segundo, CA, and since that time has seen tremendous growth and success. Currently IDS has offices supporting customers nationwide. By providing customers with fast, efficient and reliable information systems and support services, IDS has become a preferred provider of financial and programmatic systems, services, and solutions across a wide variety of Government agencies.

    Position Description:

    IDS is seeking a Senior Data Engineer with an expert understanding of data integration, data warehousing and APIs. You will be responsible for integrating data across multiple systems, improving existing data warehouse architecture and assisting with our transition to AWS. We are looking for an ambitious self-starter with proven technical and analytical capabilities and the tenacity to develop ideas independently and thrive in a fast-paced environment. This role is for an engineer who loves to roll up their sleeves, dive in, and tackle any problem with speed and precision.

    Responsibilities include, but are not limited to:

    • Participate in the design of our overall data collection strategy, including technology, data pipelines, and visualizations.
    • Build a scalable data insight platform that makes every decision data driven.
    • Maintain and expand ETL and reporting tools.
    • Creatively solve complex problems while understanding the scope and long-term impact of your work
    • Communicate with the Product Management and development teams to raise issues and identify potential barriers in a timely fashion.
    • Develop an understanding of key business, product and user questions.
    • Mentor other engineers and promote knowledge sharing.

    Physical & Mental Qualifications:

    • Must be able to lift/carry at least 15 lbs.
    • Must be able to remain in a stationary position 80% of the time.
    • Must consistently work and type on a computer and may be required to move about inside the office to access file cabinets, office supplies, etc.

    Knowledge and Skills:

    • At least 5 years of experience as a data engineer.
    • At least 5 years of experience developing ETL or ELT solutions.
    • At least 5 years of experience with SQL.
    • At least 2 years of experience working with Talend Data Integration.
    • Experience in analyzing ETL requirements and implementing complex ETL jobs.
    • Ability to interface effectively with team members from all functional disciplines.
    • Exceptional problem-solving skills and the ability to rapidly analyze complex technology and business scenarios.
    • Experience with AWS and Amazon S3 desired.
    • Experience with MS Office (Word, Excel, PowerPoint), Atlassian JIRA and Confluence desired.
    • Experience with Department of Defense (DoD) finance, contracting, acquisition and logistics systems is a plus.

    Education and Work Experience:

    This position requires a minimum of a 4-year degree from an accredited college or university in business management, engineering, computer science, mathematics, accounting, economics, or another related discipline. Experience in lieu of education may be considered if the individual has 7 or more years of relevant experience.

    Certificates and Licenses:

    Applicants selected for employment will be subject to a Federal background investigation and must meet additional eligibility requirements for access to classified information or materials.

    Travel:

    Travel will be required.

    Hours:

    Normal work schedule will be 8:00 A.M. to 5:00 P.M., Monday through Friday.  May be required to work additional hours and/or weekends, as needed, to meet deadlines or to fulfill travel obligations.

    Salary Range:

    Commensurate with experience.

    The company offers a full benefits package including health, dental, vision, and 401K plans. IDS is an Equal Opportunity Employer, and all qualified applicants will receive consideration for employment without regard to race, creed, age, sex, gender, physical or mental disability, sexual orientation, gender identity, gender expression, ancestry, pregnancy, perceived pregnancy, medical condition, marital status, familial status, color, religion, uniformed services, veteran status, national origin, genetic information, or any other characteristic protected under local, state, or Federal law. Submission of a resume is an expression of interest and is not considered an application.

    For more information, visit www.get-integrated.com. To apply, please send a resume and cover letter to [email protected] and reference code: IDS-DE-VL.

    **U.S. citizenship and/or a green card is required; H1-B and other visas are not being sponsored. Relocation expenses are NOT compensated. All jobs are employer paid; no fees to candidates. Third-party and agency inquiries are not being accepted.**

  • Daily Kos (Kos Media LLC)
    PROBABLY NO LONGER AVAILABLE. $100,000.00 - $110,000.00.

    LOCATION: Remote within the US or Oakland, CA

    Daily Kos is the nation’s largest liberal online political community, news organization, and activism hub. Powered by millions of dedicated activists, we’re transforming media and organizing by empowering regular Americans to reshape politics.

    We are seeking an independent, self-aware, remote-work-experienced Front End Data Visualization Developer who will be responsible for writing code used by millions of people who care about progressive politics and are working to make real political change. You’ll join a committed team of writers, researchers, and engineers as we plan ahead for the 2020 elections and beyond. Building on our recent Elections site redesign, you’ll have an opportunity to bring readers new insight into our elections coverage in a visual, data-driven way.

    Our highly regarded Elections team produces both unique editorial content and one-of-a-kind quantitative data daily, in order to explain what’s happening in every important election across the country. You’ll help present all of this information to our readership in new and accessible ways, including data visualizations, interactive maps, poll charts, and more. You’ll also be responsible for building, maintaining, and enhancing the databases that power the front end of our Elections site.

    QUALIFICATIONS:

    • Expert knowledge of D3/JavaScript graphing technologies
    • Experience in graphics programming, interactive graphics, data visualization, and data analysis
    • Excellent problem solving, design, development, and debugging skills
    • Excellent relational database skills, including both data modeling and query building
    • Experience with large databases: ETL, Redshift or another columnar store, etc. (see the sketch after this list)
    • Interest in politics, elections, and political data
    • Knowledge of statistics
    • Python/R
    • React
    • Elasticsearch
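
    The sketch referenced above: the standard way to bulk-load a columnar store like Redshift is a COPY from S3 rather than row-by-row inserts. A minimal, hedged example via psycopg2; the cluster endpoint, credentials, bucket, and IAM role are all placeholders:

      # Columnar-store bulk-load sketch; all connection details are placeholders.
      import psycopg2

      conn = psycopg2.connect(
          host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
          port=5439, dbname="elections", user="etl", password="placeholder",
      )
      cur = conn.cursor()
      # COPY pulls the file server-side from S3, parallelized across the cluster.
      cur.execute("""
          COPY poll_results
          FROM 's3://example-bucket/polls/latest.csv'
          IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-etl'
          CSV IGNOREHEADER 1;
      """)
      conn.commit()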

    The above list is a set of highly desired attributes; we will evaluate each candidate as a whole. Tell us about unique skills, experiences, and attributes that you can bring to such a role that we may be missing and why it would be a benefit to the organization.

    SALARY RANGE:
    $100,000 - $110,000

    BENEFITS:

    This position is a 40 hour/week, full-time exempt position. Candidates must be legally eligible to work in the United States. The position offers a flexible work environment, the ability to work remotely or from home, competitive salary, and excellent benefits, including full medical, dental, and vision coverage; an optional 401K with a company match; a professional development stipend; a generous vacation package; and employer-paid maternity/family leave. Our organizational commitment to personal growth and work-life balance reduces churn and encourages a very rewarding long-term position.

    At Daily Kos, we believe that the diversity of ideas, experiences, and cultures that our employees contribute to our organization help us be more effective activists, and we are proud to be an inclusive and equal opportunity workplace. We have a team of amazing people with different backgrounds and talents that are energized by the day’s news events, and people united by common cause. We’re a company that loves learning and supports growth and training for all our employees.

    Women, people of color, and LGBTQIA individuals are strongly encouraged to apply.

  • Surge
    PROBABLY NO LONGER AVAILABLE.

    Surge Forward is looking for a smart, self-motivated, experienced, senior-level remote developer to work as a long-term independent contractor.

    Experience Required: 

    Senior Data Engineer Openings

    Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

    For immediate consideration, email resume with tech stack under each job and include what versions of Angular you have coded in (directly on the resume) as well as cell phone number and start date.

  • Surge
    PROBABLY NO LONGER AVAILABLE.

    Surge Forward is looking for smart, self-motivated, experienced, senior-level consultants who enjoy the freedom of telecommuting and flexible schedules, to work as long-term, consistent (40 hrs/week) independent contractors on a variety of software development projects.

    Experience Required:

    Looking for a contracted database administrator/developer. This resource will initially focus on building executive-targeted reports out of the client's cable service provider billing system. Long term, the client would like to use this resource to help develop an analytical data store/data mart off of the billing system data.

    Skills Required (Initially):
    • Strong SQL (SQL Server) knowledge to generate data sources off of the source system
    • Building reports with SQL Server Reporting Services (SSRS)
    • Building reports with Tableau

    Skills Required (Long Term):
    • Familiarity with data warehousing/building data marts
    • ETL via SQL Server Integration Services (SSIS)

    Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

    Resume must include the tech stack under each job.

    For immediate consideration, email resume and include your cell phone number and start date.