628 Data Engineer jobs in Canada
Data Engineer
Posted today
Job Description
- Have worked on one or more MDM projects
- Have strong functional knowledge of reference data
- Python, Databricks, ADF, SQL, Kafka, Snowflake, Airflow, Git
- Good communication skills
- Financial/Investment banking experience
Data Engineer
Posted today
Job Description
Charger Logistics Inc. is a world-class asset-based carrier with locations across North America. With over 20 years of experience providing the best logistics solutions, Charger Logistics has transformed into a world-class transport provider and continues to grow.
Charger Logistics invests time and support into its employees to give them the room to learn, grow their expertise, and work their way up. We are an entrepreneurial-minded organization that welcomes and supports individual ideas and strategies. We are seeking an experienced Data Engineer with strong expertise in DB2 database systems to join our data engineering team. The ideal candidate will have deep SQL knowledge, Python programming skills, and proven experience in data transformation processes to support our enterprise data initiatives.
Responsibilities:
- Design, develop, and maintain robust data pipelines using DB2 as the primary database platform
- Perform complex data transformations to convert raw data into analytics-ready formats
- Write and optimize advanced SQL queries for data extraction, manipulation, and loading processes
- Develop Python scripts and applications for data processing, automation, and pipeline orchestration
- Collaborate with data analysts and business stakeholders to understand data requirements and deliver solutions
- Implement data quality checks and validation processes to ensure data integrity
- Monitor and troubleshoot data pipeline performance, implementing optimizations as needed
- Document data flows, transformations, and technical processes
- Support data migration projects and system integrations involving DB2 databases
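As a rough illustration of the kind of pipeline work described above, here is a minimal sketch that pulls rows from a hypothetical DB2 table with SQLAlchemy and pandas and lands a curated extract. The connection string, table, and column names are assumptions for the example, not details from the posting.

```python
# Minimal sketch: extract from a hypothetical DB2 table, transform with pandas,
# and land a curated file. Connection string, table, and columns are assumed.
import pandas as pd
from sqlalchemy import create_engine

# The ibm-db-sa dialect provides DB2 connectivity for SQLAlchemy (assumed installed).
engine = create_engine("db2+ibm_db://user:password@db2-host:50000/SAMPLE")

query = """
    SELECT order_id, customer_id, order_ts, amount
    FROM sales.orders
    WHERE order_ts >= CURRENT DATE - 7 DAYS
"""

with engine.connect() as conn:
    raw = pd.read_sql(query, conn)

# Basic transformation: type cleanup, deduplication, and a simple quality check.
raw["order_ts"] = pd.to_datetime(raw["order_ts"])
curated = raw.drop_duplicates(subset=["order_id"])
assert curated["amount"].notna().all(), "Null amounts found in extract"

curated.to_parquet("curated/orders_last_7_days.parquet", index=False)
```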
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or related field
- 3+ years of hands-on experience with IBM DB2 database administration and development
- Expert-level SQL skills including complex joins, window functions, stored procedures, and performance tuning
- Strong Python programming experience with data processing libraries (pandas, NumPy, SQLAlchemy)
- Proven experience in data transformation techniques including ETL/ELT processes
- Experience with data modeling and database design principles
- Knowledge of data warehousing concepts and dimensional modeling
- Familiarity with version control systems (Git) and CI/CD practices
Preferred Qualifications:
- Experience with additional databases (PostgreSQL, Oracle, SQL Server)
- Knowledge of cloud platforms (AWS, Azure, GCP) and their data services
- Familiarity with workflow orchestration tools (Apache Airflow, Luigi)
- Experience with big data technologies (Spark, Hadoop ecosystem)
- Understanding of data governance and compliance requirements
- Professional certifications in DB2 or cloud data platforms
Technical Skills:
- Database: IBM DB2 (required), SQL Server, Oracle, PostgreSQL
- Programming: Python (pandas, NumPy, SQLAlchemy, pytest)
- Tools: Git, Docker, Jenkins, data pipeline orchestration tools
- Concepts: ETL/ELT, data warehousing, data modeling, performance optimization
Benefits:
- Competitive Salary
- Healthcare Benefits Package
- Career Growth
Data Engineer
Posted today
Job Description
Department: Information Technology
Location: Canada-CHQ-Ontario-Toronto
Description
This position is ideal for a mid-level data engineer joining the Global Data and Analytics team as a Data Engineer, assisting in building and maintaining cloud-native data analytics and machine learning solutions in the field of orthodontics. Partnering with cross-functional teams, you'll contribute to building out our clinical data lake and work with data analysts and ML engineers to implement various data solutions. This role is vital in building Align Technology's Data Platform and providing clinical insights.
Role expectations
- Write code. This is an individual contributor engineering role.
- Analyze business requirements, design, and document solution architecture for Data Products.
- Create and maintain cloud-native analytics data pipelines for Align Data Platform.
- Develop unit-tests, ensure code coverage, and write technical documentation.
- Monitor production data pipelines and proactively identify issues
- Work with development teams to integrate data products with other applications.
- Work in Agile/Scrum teams, participate in scrum ceremonies including sprint planning, sprint grooming, daily stand-ups, and sprint review.
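To illustrate the "write code, develop unit-tests, ensure code coverage" expectations above, here is a minimal sketch of a small, testable transformation with a pytest-style unit test. The function, table, and column names are hypothetical and only serve as an example.

```python
# Illustrative only: a small, testable transformation plus a pytest unit test.
# Function and column names are hypothetical.
import pandas as pd


def deduplicate_scans(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent record per scan_id, based on the updated_at column."""
    return (
        df.sort_values("updated_at")
          .drop_duplicates(subset=["scan_id"], keep="last")
          .reset_index(drop=True)
    )


def test_deduplicate_scans_keeps_latest_record():
    df = pd.DataFrame(
        {
            "scan_id": [1, 1, 2],
            "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
            "status": ["draft", "final", "final"],
        }
    )
    result = deduplicate_scans(df)
    assert len(result) == 2
    assert result.loc[result["scan_id"] == 1, "status"].item() == "final"
```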
What We're Looking For
- Education: Bachelor's or Master's degree in Computer Science.
- Experience: 2+ years working as a software engineer; 2+ years of Python programming experience (Scala a plus); 2+ years of working with databases and writing SQL queries; 1+ year of working with Apache Spark and pySpark (experience with the Databricks platform preferred); application of DevOps best practices; experience working in an Agile environment.
- Skills: Python, SQL programming, knowledge of Data Lake design practices, knowledge of DevOps best practices, understanding of full-scale enterprise SDLC process, familiarity with Atlassian tools (JIRA, Confluence).
- Teamwork: ability to work both independently and collaboratively.
Complementary skills
- Databricks platform, Delta tables, Apache Spark, pySpark, Python, and SQL programming; experience with SAP, Business Objects BI reporting, and data visualization tools.
- Excellent teamwork skills.
Pay Transparency
If provided, base salary or wage rate ranges are the range in which Align reasonably expects to set a candidate's pay for the posted position. Actual placement depends on the individual skills and experience level of a candidate plus the total compensation and equity across team members. For other locations outside of the primary location, the base salary range will be adjusted geographically.
For Field Sales roles, the salary listed is the base pay only and does not include the applicable incentive compensation plan. A cost of living adjustment may be added to base pay for higher cost areas in the U.S.
Our internship hourly rates are a standard pay determined based on the position and your location, year in school, degree, and experience.
Applicant Privacy Policy
Review our Applicant Privacy Policy for additional information.
Equal Opportunity Statement
Align Technology is an equal opportunity employer. We are committed to providing equal employment opportunities in all our practices, without regard to race, color, religion, sex, national origin, ancestry, marital status, protected veteran status, age, disability, sexual orientation, gender identity or expression, or any other legally protected category. Applicants must be legally authorized to work in the country for which they are applying, and employment eligibility will be verified as a condition of hire.
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Location: Remote Role
Interview: Video Interview
Description:
Provides thought leadership in modern data platforms, integration, and analytics; shapes vision, standards, and best practices for Microsoft Fabric, Informatica IICS, and enterprise reporting.
Drives outcomes end-to-end with minimal direction: discovery, assessment, target-state architecture, roadmap, implementation, production readiness, and transition to operations.
Drives Medallion (Bronze/Silver/Gold) architecture on Microsoft Fabric/OneLake: Lakehouse/Warehouse design, Delta/Parquet standards, partitioning/layout, workspace/capacity strategy, performance and cost guardrails.
Drives Informatica IICS architecture and delivery: connector strategy (e.g., SAP, Salesforce, Oracle), CDC and pushdown optimization, resilient CDI mappings/orchestrations, error handling/retries, and operational monitoring.
Drives robust ETL/ELT engineering across Fabric Pipelines/Notebooks/Dataflows and IICS CDI: metadata-driven patterns, parameterization, schema evolution, CDC/SCD, idempotency, observability, and SLA/SLO adherence.
Drives API-led integration strategy: OpenAPI-first design, versioning, OAuth2/OIDC and mTLS/JWT security, throttling/caching, and mediation using Azure API Management and/or IICS API Manager; implements API analytics and alerting.
Drives governance and security by design: Microsoft Purview catalog/lineage, sensitivity labeling, RBAC/PIM, Key Vault secrets, private endpoints, encryption, and data masking/privacy controls aligned to compliance needs.
Drives stakeholder alignment and decision-making: frames options and trade-offs, maintains decision records, manages risks/issues/dependencies, and communicates clearly to executive and technical audiences.
Key responsibilities
Own end-to-end solution delivery: discovery, assessment, target architecture, roadmap, estimates, and production rollout; proactively identify risks and drive decisions.
Architect and implement Medallion data platforms in Fabric: Lakehouse/Warehouse design, partitioning, Delta Lake, CDC/SCD, performance and cost governance, capacity/workspace strategy.
Design and build robust pipelines: ingestion and transformations in Fabric (Pipelines, Dataflows Gen2, Notebooks) and/or IICS CDI; ensure reliability, observability, and SLAs.
Lead Informatica Cloud solutioning: integrations and orchestrations, API creation/publishing, connector selection, security, monitoring, and error handling at scale.
Establish standards and guardrails: coding conventions, data modeling patterns, security, quality gates, and review processes; implement Purview-based governance and lineage.
Implement DevOps and automation: version control, CI/CD for Fabric/IICS/Power BI, environment strategy, test automation, and IaC for repeatable deployments.
Partner with stakeholders to prioritize use cases, sequence deliveries, and measure outcomes; create clear design docs and present to exec/technical audiences.
Mentor engineers and lead design reviews; uplift team capabilities in Fabric, Informatica, ETL/ELT, and analytics.
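As a minimal sketch of the Medallion pipeline work described in the responsibilities above, the following PySpark snippet shows a bronze-to-silver step as it might look in a Fabric or Databricks notebook. Paths, table names, and columns are placeholders, not details from this role.

```python
# Minimal sketch of a bronze-to-silver Medallion step (PySpark / Delta).
# Paths, table names, and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested records, stored as-is in Delta format.
bronze = spark.read.format("delta").load("Tables/bronze_orders")

# Silver: cleaned, conformed, deduplicated data ready for modeling.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("Tables/silver_orders")
)
```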
Must-have experience and skills:
10–15 years total in data architecture, integration, and analytics delivering enterprise-grade platforms.
3–5 years hands-on with Microsoft Fabric: OneLake, Lakehouse/Warehouse, Data Factory/Pipelines, Dataflows Gen2, Notebooks (PySpark), capacities/workspaces, and Lakehouse Medallion patterns using Delta/Parquet.
5–8 years on Azure data platform services: ADLS, Azure AD/Entra ID, Key Vault, Azure API Management, Event Hubs/Functions/Logic Apps for integrations; strong grasp of security, networking, and cost governance.
3–5 years with Informatica IICS: Cloud Data Integration (CDI) mappings/tasks, Cloud Application Integration, API Manager; CDC, pushdown optimization, high-throughput patterns, error handling/retries.
6–10 years strong, hands-on ETL/ELT: SQL and PySpark, performance tuning, partitioning, schema evolution, SCD, orchestration, idempotency, SLAs/SLOs.
Proven end-to-end assessments (3+ engagements) covering: current-state discovery, gap analysis, and target-state architecture in Fabric; Informatica ETL/ELT and API integration patterns; governance/security; migration and adoption roadmaps with estimates/TCO.
API integration expertise: REST/OpenAPI-first design, OAuth2/OIDC, mTLS/JWT, throttling, caching, monitoring; publishing/mediating APIs via Azure API Management and/or IICS API Manager.
Architecture leadership: creates clear design docs, reference architectures, and decision records; communicates trade-offs to executives and engineers; mentors/upskills teams. Operates autonomously and drives decisions without waiting for direction.
Good-to-have experience:
Streaming and near-real-time patterns: Event Hubs/Kafka, change data capture to Lakehouse, Kappa/Lambda designs.
Advanced data modeling: dimensional/star, data vault 2.0, domain-oriented data product design.
Analytics enablement: Power BI governance at scale (deployment pipelines, certified datasets, semantic link, scorecards), Fabric Warehouse optimization.
Observability: Fabric monitoring hub, Log Analytics, Application Insights; cost/performance tuning and FinOps practices.
Master data and reference data management; data quality frameworks and test automation.
Regulated data (PII/PHI) and industry compliance (HIPAA, GDPR, SOX).
Workload migration: on-prem/other-cloud to Fabric/IICS, Synapse or Databricks to Fabric Lakehouse.
Certifications: Microsoft Fabric Analytics Engineer Associate or Azure Data Engineer
Location/travel:
Flexible (remote/hybrid); occasional travel for workshops and stakeholder sessions.
Data Engineer
Posted today
Job Description
Position Overview:
As a Data Engineer, you will be responsible for building and maintaining pipelines, ensuring data quality, and supporting governance and compliance across the enterprise lakehouse. You'll collaborate with data scientists, analysts, and business teams to deliver reliable and accessible data products. This role is ideal for someone with a few years of engineering experience who is ready to grow technical depth and platform ownership.
Key responsibilities include:
- Develop and maintain robust ingestion pipelines for structured and unstructured data.
- Implement transformations and modeling in the medallion lakehouse framework.
- Apply data quality checks and monitoring for pipeline health.
- Contribute to metadata, lineage, and cataloging efforts.
- Support security/privacy controls including role-based access and encryption.
- Collaborate with stakeholders to deliver BI and AI-ready datasets.
- Participate in code reviews and follow engineering best practices (CI/CD, IaC, testing).
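For the data quality checks and pipeline monitoring mentioned in the responsibilities above, a minimal rule-based validation sketch might look like the following. The column names, file path, and rules are assumptions for illustration only.

```python
# Illustrative data quality check: simple rule-based validation on an ingested batch.
# Column names and rules are assumptions.
import pandas as pd


def run_quality_checks(batch: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; an empty list means the batch passed."""
    failures = []
    if batch.empty:
        failures.append("Batch is empty")
    if batch["account_id"].isna().any():
        failures.append("Null account_id values found")
    if batch.duplicated(subset=["account_id", "event_date"]).any():
        failures.append("Duplicate account_id/event_date rows found")
    return failures


batch = pd.read_parquet("landing/accounts_daily.parquet")
issues = run_quality_checks(batch)
if issues:
    # In a real pipeline this would raise an alert or fail the run.
    raise ValueError("; ".join(issues))
```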
The ideal candidate will possess:
- 2–5 years of experience in data engineering or related roles.
- Strong SQL and Python programming skills.
- Experience with modern platforms (e.g., Databricks, Azure, cloud storage).
- Familiarity with ingestion from APIs, Salesforce, and on-prem databases.
- Understanding of data governance, quality, and observability practices.
- Exposure to CI/CD and infrastructure automation tools.
- Strong problem-solving and communication skills.
- Bachelor's degree in Computer Science, Engineering, Mathematics, or related field.
Condition of employment:
As a condition of employment and in order to comply with industry related data security standards, this position is subject to the successful completion of a Criminal Background Check. Details will be supplied to applicants as they move through the selection process.
Xplore is committed to creating an accessible environment and will accommodate disabilities during the selection process. Please let your recruiter know during the selection process of any accommodation needs.
Company Overview:
Xplore Inc. is Canada's fibre, 5G and satellite broadband company for rural living. Xplore is committed to the relentless pursuit of an improved broadband experience for all Canadians. Xplore is building a world-class fibre optic and 5G wireless network to enable innovative broadband services for better everyday rural living, for today and future generations.
Data Engineer
Posted today
Job Description
Description
Are you passionate about building robust data pipelines and transforming raw data into well-structured, high-quality insights? We're seeking a Data Engineer to join our dynamic data team and help shape the future of our enterprise data lakehouse on Azure and Databricks.
In this role, you will design, build, and maintain ingestion pipelines from multiple source systems into our medallion architecture (bronze, silver, gold layers). Your work will span Python development, SQL transformations, and data modeling, ensuring data is accurate, performant, and business ready.
You'll work hands-on optimizing ELT processes, ensuring data quality and performance, and collaborating across teams—from security to data architecture—to deliver scalable, secure, and efficient solutions. You'll also support the development of data models, implement both batch and selective real-time processing, and contribute to a strong testing and documentation culture.
Apply today to join our True-Blue team!
Essential Responsibilities:
- Build, maintain, and support the ingestion of data from source systems (e.g., ERP, CRM, HR) into the Databricks lakehouse
- Monitor and optimize performance and quality of data transfers across medallion layers
- Collaborate with Information Security on securing pipelines and access controls
- Set up schedules and triggers for full and incremental ingestion using Azure Data Factory and Databricks
- Design, build, and test data pipelines using Python, SQL, and Databricks notebooks
- Profile source data to assess structure, quality, and relationships
- Work with the Data Architect to define and implement data models (schemas, tables, relationships, and mapping logic)
- Develop automated testing strategies for data quality, completeness, and performance (unit, integration, and regression testing)
- Build ELT processes to transform raw data into curated datasets within Databricks
- Implement batch-oriented solutions, with selective adoption of near real-time pipelines
- Apply partitioning, indexing, and optimization techniques for scalable storage and query performance
- Create documentation, runbooks, and training materials to support adoption and knowledge sharing
- Establish methods to track and improve data quality, completeness, and consistency
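As an illustration of the ingestion and partitioning responsibilities listed above, here is a rough sketch of loading a raw extract into a bronze Delta table partitioned by load date. The source path, table name, and columns are assumptions, not details from this posting.

```python
# Rough sketch: load a raw extract into a bronze Delta table, partitioned by load date.
# Source path, table name, and columns are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/raw/erp/invoices/2024-06-01/")
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_load_date", F.current_date())
)

(
    raw.write
    .format("delta")
    .mode("append")
    .partitionBy("_load_date")  # partitioning for scalable storage and partition pruning
    .saveAsTable("bronze.erp_invoices")
)
```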
Qualifications:
- 4–6 years of relevant data engineering experience.
- Bachelor's degree in Computer Science, Information Management, or related field (or equivalent experience)
- Proficiency in Python and advanced SQL is a must-have
- Experience with Azure Data Factory, Databricks (Delta Lake), and Git/GitHub.
- Strong data modeling skills (e.g., star schema, Data Vault, normalization/denormalization)
- Familiarity with Agile delivery, DevOps, and CI/CD practices for data pipelines
- Strong analytical, problem-solving, and collaboration skills
- Experience with Snowflake, dbt, or other modern data stack tools is an asset
- Familiarity with data governance and metadata management tools is nice to have
- Exposure to event streaming (Kafka, Event Hub) is nice to have
Working Conditions:
- This is a hybrid position with remote flexibility available
Additional Information
The Ledcor Group of Companies is one of North America's most diversified construction companies. Ledcor is a company built on a rich history of long-standing project successes.
Our workplace culture has been recognized as one of Canada's Best Diversity Employers, Canada's Most Admired Corporate Cultures, and a Top 100 Inspiring Workplace in North America.
Our competitive total rewards package provides compensation and benefits that support your physical, mental and financial wellbeing. We offer exciting, challenging work with opportunities to develop your skills and knowledge.
Employment Equity
At Ledcor we believe diversity, equity, and inclusion should be part of everything we do. We are proud to be an equal-opportunity employer. All qualified individuals, regardless of race, color, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, Veteran status or any other identifying characteristic are encouraged to apply.
Our True Blue team consists of individuals from all backgrounds who contribute diverse perspectives and experiences to Ledcor. We are committed to continuing to build on our culture of empowerment, inclusion and belonging.
Adjustments will be provided in all parts of our hiring process. Applicants need to make their needs known in advance by submitting a request via email. For more information about Ledcor's Inclusion and Diversity initiatives, please visit our I&D page.
st Street SE, Calgary, AB
Data Engineer
Posted today
Job Description
Role: Data Engineer
Location: Remote
Description:
We are seeking talented and versatile Data Engineer(s) to join our dynamic team. The ideal candidate(s) will have a strong foundation in data engineering practices, combined with the analytical skills necessary to derive actionable insights from data. This role involves designing, implementing, and maintaining robust data pipelines and architectures, as well as performing detailed data analysis to support business decisions.
Data Engineering:
• Design, build, and maintain data pipelines on-premises and in the cloud (Azure, GCP, AWS) to ingest, transform, and store large datasets. Ensure pipelines are reliable and support multiple business use cases.
• Create and optimize dimensional models (star/snowflake) to improve query performance and reporting. Ensure models are consistent, scalable, and easy for analysts to use.
• Integrate data from SQL, NoSQL, APIs, and files while maintaining accuracy and completeness. Apply validation checks and monitoring to ensure high-quality data.
• Improve ETL/ELT processes for efficiency and scalability. Redesign workflows to remove bottlenecks and handle large, disconnected datasets.
• Build and maintain end-to-end ETL/ELT pipelines with SSIS and Azure Data Factory. Implement error handling, logging, and scheduling for dependable operations.
• Automate deployment, testing, and monitoring of ETL workflows through CI/CD pipelines. Integrate releases into regular deployment cycles for faster, safer updates.
• Manage data lakes and warehouses with proper governance. Apply security best practices, including access controls and encryption.
• Partner with engineers, analysts, and stakeholders to translate requirements into solutions. Prepare curated data marts and fact/dimension tables to support self-service analytics.
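To make the dimensional modeling and data mart work above concrete, here is a hedged sketch that splits a flat extract into a dimension and a fact table with pandas. The source file and column names are placeholders for illustration.

```python
# Hedged sketch: prepare fact/dimension tables from a flat extract.
# Source file and column names are assumed.
import pandas as pd

orders = pd.read_parquet("staging/orders_flat.parquet")

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    orders[["customer_id", "customer_name", "region"]]
    .drop_duplicates(subset=["customer_id"])
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1

# Fact: measures plus foreign keys to the dimension.
fact_orders = orders.merge(
    dim_customer[["customer_id", "customer_key"]], on="customer_id", how="left"
)[["order_id", "customer_key", "order_date", "quantity", "amount"]]

dim_customer.to_parquet("marts/dim_customer.parquet", index=False)
fact_orders.to_parquet("marts/fact_orders.parquet", index=False)
```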
Data Analytics:
• Analyse datasets to identify trends, patterns, and anomalies. Use statistical methods, DAX, Python, and R to generate insights that inform business strategies.
• Develop interactive dashboards and reports in Power BI using DAX for calculated columns and measures. Track key performance metrics, share service dashboards, and present results effectively.
• Build predictive or descriptive models using statistical, Python, or R-based machine learning methods. Design and integrate data models to improve service delivery.
• Present findings to non-technical audiences in clear, actionable terms. Translate complex data into business-focused insights and recommendations.
• Deliver analytics solutions iteratively in an Agile environment. Mentor teams to enhance analytics fluency and support self-service capabilities.
• Provide data-driven evidence to guide corporate priorities. Ensure strategies and initiatives are backed by strong analysis, visualizations, and models.
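For the trend and anomaly analysis described above, a small Python example might flag unusual daily volumes with a rolling z-score. The dataset, window, and threshold are assumptions chosen for the sketch.

```python
# Illustrative analysis step: flag anomalous daily volumes with a rolling z-score.
# Dataset, window size, and threshold are assumptions.
import pandas as pd

daily = pd.read_parquet("marts/fact_orders.parquet")
volume = (
    daily.groupby("order_date", as_index=False)["amount"].sum()
    .sort_values("order_date")
)

rolling = volume["amount"].rolling(window=28, min_periods=7)
volume["zscore"] = (volume["amount"] - rolling.mean()) / rolling.std()
anomalies = volume[volume["zscore"].abs() > 3]

print(anomalies[["order_date", "amount", "zscore"]])
```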
Facilities
Data Engineer(s) shall be responsible for providing all of their equipment, including computers, software, printers, supplies, desks, and chairs. However, the Province shall ensure that the Data Engineer(s) have the necessary access and credentials to the GoA system.
If the Data Engineer(s) are directed to work in-person, the Province shall provide the requisite office space, furniture and office supplies. However, the Data Engineer(s) shall continue to be responsible for providing computers and software and the Province shall continue to ensure that the Data Engineer(s) have the necessary access and credentials to the GoA system.
Data Engineer
Posted today
Job Description
You will work as a Data Engineer in Trading Technology, Public Market Investments team, partnering with traders, investment professionals and operations staff to design and implement solutions enabling trading and post-trade activities. You will be responsible for hands-on development of analytical solutions covering several asset classes including equities, fixed income, derivatives, OTC and FX. Through close partnership with traders, investment professionals and operations, you will see firsthand how your software is impacting trade activities.
Responsibilities
Own aspects of designing, building and maintaining a scalable and efficient cloud-based data platform that meets the needs of trading analytics.
Help implement processes and tools to monitor and improve data quality.
Model and efficiently store financial trading data for use in analytics.
Create procedures to ensure best practices are being met in the software development process.
Liaise with technical and business individuals who may be internal staff or external vendors towards the completion of projects.
Create solutions tailored to business requirements aligned with the long-term architecture and technology strategy using Amazon Web Services (AWS) for Cloud development.
Provide knowledge
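As a hedged example of modeling and storing trading data for analytics, in line with the responsibilities above, normalized trade records might be written as partitioned Parquet on S3. The bucket, prefix, and columns are placeholders; pyarrow and s3fs are assumed to be available.

```python
# Hedged example: store normalized trade records as partitioned Parquet on S3.
# Bucket, prefix, and columns are placeholders.
import pandas as pd

trades = pd.DataFrame(
    {
        "trade_id": ["T1001", "T1002"],
        "asset_class": ["equity", "fx"],
        "symbol": ["ABC", "USDCAD"],
        "quantity": [500, 1_000_000],
        "price": [42.15, 1.3712],
        "trade_date": pd.to_datetime(["2024-06-03", "2024-06-03"]),
    }
)

# Partitioning by asset class and trade date keeps queries for a single desk or day cheap.
trades.to_parquet(
    "s3://example-trading-lake/trades/",
    partition_cols=["asset_class", "trade_date"],
    index=False,
)
```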
Data Engineer
Posted today
Job Description
Charger Logistics Inc. is a world-class asset-based carrier with locations across North America. With over 20 years of experience providing the best logistics solutions, Charger Logistics has transformed into a world-class transport provider and continues to grow.
Charger Logistics invests time and support into its employees to give them the room to learn, grow their expertise, and work their way up. We are an entrepreneurial-minded organization that welcomes and supports individual ideas and strategies. We are seeking a skilled Data Engineer with strong DBT (data build tool) experience to join our modern data stack team. The successful candidate will leverage DBT, Python, and SQL expertise to build scalable, maintainable data transformation pipelines that power our analytics and business intelligence initiatives.
Responsibilities:
- Develop and maintain data transformation models using DBT for scalable analytics workflows
- Build reusable, modular SQL transformations following DBT best practices and software engineering principles
- Implement data quality tests and documentation within DBT framework
- Design and optimize complex SQL queries for data modeling and transformation
- Create Python applications for data ingestion, API integrations, and pipeline orchestration
- Collaborate with analytics teams to translate business requirements into robust data models
- Implement version control workflows and CI/CD processes for DBT projects
- Monitor data pipeline performance and implement optimization strategies
- Establish data lineage tracking and impact analysis using DBT's built-in capabilities
- Mentor team members on DBT development patterns and SQL optimization techniques
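To illustrate the Python ingestion and API integration work mentioned above, here is a small, hypothetical example that pulls records from a REST API and lands them as a file for downstream DBT transformations. The endpoint, token, and response schema are assumptions, not a real integration.

```python
# Hypothetical example: pull records from a REST API and land them for DBT to transform.
# Endpoint, token, and response schema are assumptions.
import json

import requests

API_URL = "https://api.example.com/v1/shipments"
headers = {"Authorization": "Bearer <token>"}

records = []
page = 1
while True:
    resp = requests.get(API_URL, headers=headers, params={"page": page}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    records.extend(payload["results"])
    if not payload.get("next_page"):
        break
    page += 1

# Land the raw payload; DBT models then transform it inside the warehouse.
with open("landing/shipments_raw.json", "w") as f:
    json.dump(records, f)
```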
Requirements
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Data Science, or related field
- 2+ years of hands-on experience with DBT (data build tool) development and deployment
- Expert-level SQL skills including CTEs, window functions, and advanced analytical queries
- Strong Python programming experience, particularly for data processing and automation
- Experience with modern data warehouses (Snowflake, BigQuery, Redshift, or Databricks)
- Solid understanding of dimensional modeling and data warehouse design patterns
- Experience with version control (Git) and collaborative development workflows
- Knowledge of data testing strategies and data quality frameworks
Preferred Qualifications:
- DBT certification or demonstrated advanced DBT knowledge
- Experience with cloud data platforms and their native services
- Familiarity with workflow orchestration tools (Airflow, Prefect, Dagster)
- Knowledge of data visualization tools (Looker, Tableau, Power BI)
- Experience with streaming data processing frameworks
- Understanding of DataOps and analytics engineering principles
- Experience with Infrastructure as Code (Terraform, CloudFormation)
Technical Skills:
- DBT: Model development, testing, documentation, macros, packages, deployment
- SQL: Advanced querying, performance optimization, data modeling
- Python: pandas, SQLAlchemy, requests, data pipeline frameworks
- Data Warehouses: Snowflake, BigQuery, Redshift, or similar cloud platforms
- Tools: Git, Docker, CI/CD pipelines, data orchestration platforms
- Concepts: Dimensional modeling, data testing, analytics engineering, DataOps
What You'll Build:
- Scalable DBT models that transform raw data into analytics-ready datasets
- Automated data quality tests and monitoring systems
- Self-documenting data pipelines with clear lineage and dependencies
- Reusable data transformation components and macros
- Robust CI/CD workflows for data model deployment
Benefits
- Competitive Salary
- Healthcare Benefits Package
- Career Growth
Data Engineer
Posted today
Job Description
Atimi is on the lookout for a talented Data Engineer (DE) to join our dynamic team. This role involves working with innovative technologies to build robust data infrastructure that powers analytics and insights for our clients. If you are a self-starter with a passion for transforming data into action, Atimi is the place for you.
Requirements
Basic qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field
- 2+ years of experience in data engineering or a related field
- Proficient in SQL and data modeling
- Experience with cloud technologies, preferably AWS
- Familiarity with ETL processes and data pipeline construction
- Understanding of scripting languages such as Python or Bash
- Good problem-solving skills and attention to detail
- Ability to work collaboratively with cross-functional teams
- Strong communication skills (written and verbal)
This position can accommodate both on-site and remote work arrangements.
Interested candidates are encouraged to submit their resume and cover letter for consideration. We look forward to hearing from you.
Benefits
WHO WE ARE
Hi there, we're Atimi. You may not know us by name, but if you have a smartphone in your pocket or a computer on your desk, chances are you're familiar with our work. We partner with innovative, consumer-focused companies (think Fortune 500 and similar) who want to achieve better connections and market differentiation with premium mobile apps. We were founded in 2000 and quickly came to be known as app developers who handle complex projects and deliver high-quality work. Our deep expertise reaches back to the dawn of mobile apps, when we built 3 of the first 100 apps in Apple's App Store. Since then, we've continued to work on flagship projects that get noticed: over 60% of our apps have been featured by Apple in TV ads, iTunes advertising, in-store or in print ads. And we're just getting started.