About Me

Senior Software Engineer with 10+ years of experience pivoting into Data Engineering, bringing deep expertise in ETL pipelines, data transformation, and infrastructure. Proven track record of building data ingestion systems that process tens of thousands of records, designing automated data processing workflows on AWS (S3, Lambda), and containerizing ML training pipelines with Docker.

Experienced in Python, SQL optimization, and full-stack development across sports technology, healthcare, and consulting. Currently building open-source quantitative data transformation tooling.

Core Competencies

Data Engineering & ETL

  • Building data ingestion pipelines processing tens of thousands of records
  • ETL workflows, data transformation, data modeling, and migration
  • Web scraping (Puppeteer), statistical analysis, and database optimization

Cloud & Infrastructure

  • AWS (S3, Lambda, CDN) — automated data processing workflows and cron jobs
  • Docker — containerized ML training pipelines
  • Git/GitHub, Vercel — CI/CD and deployment

ML & Data Analytics

  • TensorFlow, Keras — neural network training and data preprocessing
  • D3.js data visualization dashboards for front-office analytics
  • Statistical modeling and feature engineering for real-world domains

Full-Stack Development

  • Frontend: React.js, Next.js — data-rich interfaces and dashboards
  • Backend: Node.js, Python, Java — APIs, services, and server-side logic
  • Databases: MySQL (query optimization, indexing), MongoDB

Quality Engineering & Test Automation

  • Selenium, Playwright, Puppeteer — automated testing at the UI and API level
  • Establishing QA standards and automation frameworks for enterprise clients
  • Data reporting and analysis to identify inconsistencies and coverage gaps

Technical Skills

Data Engineering: ETL Pipelines, Data Transformation, Data Modeling, Data Migration, Web Scraping (Puppeteer), Statistical Analysis, Database Optimization
Languages & SQL: Python, SQL (MySQL — query optimization, indexing, table restructuring), JavaScript/TypeScript, Java, C++, C#
Cloud & Infrastructure: AWS (S3, Lambda, CDN), Docker, Git/GitHub, Vercel
ML & Analytics: TensorFlow, Keras, Neural Network Training, Data Preprocessing for ML
Frameworks & Tools: React.js, Next.js, Node.js, Selenium, Playwright, MongoDB, Salesforce (Apex, SOQL, Lightning)
Methodologies: Agile, Kanban, Software Architecture, Test Automation, Quality Engineering

Professional Experience

Senior Software Engineer

GMTM April 2021 – Present · Remote
  • Architected data ingestion pipelines to scrape, transform, and load tens of thousands of athlete records into the platform’s user system using Puppeteer and Node.js
  • Revived a proprietary scoring algorithm (SPARQ) by collecting raw athletic performance data, standardizing it, and feeding it into a Dockerized neural network — achieving 98% accuracy with new scores falling within 2% of originals
  • Built the Stripe-powered subscription infrastructure from the ground up — billing, plan management, revenue reporting, and a feature-gating access control layer determining what purchased products users can access
  • Owned the end-to-end user onboarding experience; built analytics dashboards to track conversion rates and identify drop-off points, optimizing for activation and retention
  • Generated monthly revenue and revenue-sharing reports via Stripe, providing financial data visibility to stakeholders
  • Developed front-office data visualization dashboards using D3.js, rendering SPARQ athletic performance data as curves so coaches and scouts could analyze athlete metrics
  • Designed AWS Lambda cron jobs to roll up large datasets into optimized summary tables, reducing storage costs
  • Built and maintained the platform’s API infrastructure on AWS S3, handling all backend traffic
  • Supported and maintained RockyRoad, an internal serverless video transcoding system: enforced format and resolution standards for user-uploaded athlete highlight videos on the front end and loaded them to S3, where a chain of AWS Lambda functions segmented each video, transcoded the segments in parallel via FFmpeg, and reassembled them for platform delivery
  • Performed MySQL query optimization, foreign key optimization, and table restructuring to improve database performance

Python, Node.js, React.js, Next.js, D3.js, AWS (S3, Lambda, CDN), Docker, MySQL, TensorFlow, Keras, Puppeteer, Stripe API, FFmpeg

Senior Quality Analyst

Accenture August 2019 – April 2021 · Chattanooga, TN
  • Built automated reporting systems that tracked success and failure rates over time, identifying data inconsistencies and gaps in test coverage
  • Led automated testing initiatives using Selenium and Java for enterprise clients
  • Established quality assurance standards and data analysis protocols across multiple projects

Selenium, Java, Agile, Data Reporting & Analysis

Solutions Engineer

Position5 (Freelance) February 2019 – August 2019 · Chattanooga, TN
  • Delivered custom data-driven technical solutions for diverse business clients using Python, Node.js, and React.js
  • Provided solution architecture and application development expertise

Software Quality Engineer

CodeScience May 2017 – February 2019 · Chattanooga, TN
  • Executed data migration and ETL processes, moving sensitive customer data between relational database versions for SaaS clients using Node.js, MongoDB, and Pentaho
  • Led Salesforce development including Apex, SOQL, Triggers, and Lightning App Development
  • Performed test automation using Java and Node/Puppeteer; collaborated with clients to refine project scope and create user stories

Node.js, MongoDB, Pentaho, Salesforce (Apex, SOQL), Java, ETL

Software Engineer

BlueCross BlueShield of Tennessee June 2015 – May 2017 · Chattanooga, TN
  • Developed healthcare management applications using Java Enterprise Edition and IBM WebSphere
  • Built full-stack solutions with RESTful & SOAP services, MySQL, and front-end technologies

Java EE, MySQL, RESTful & SOAP Services, IBM WebSphere

Data Engineering Projects

Arcana — Open-Source Python Package  GitHub

  • Built a Python package that queries the Coinbase API for real-time cryptocurrency transaction data and transforms it into quantitative trading bars (based on Marcos López de Prado’s methodologies)
  • Designed a productized data pipeline for ingesting, transforming, and serving financial time-series data for consumption by trading systems

Python, Coinbase API, Real-Time Data Ingestion, Data Transformation

Sigil — Data Enrichment Pipeline

In Development
  • Building a data enrichment layer on top of Arcana’s output, implementing meta-labeling, sample weighting, and fractional differentiation techniques from Chapters 3–5 of Advances in Financial Machine Learning
  • Enriching financial time-series data produced by Arcana with features designed for downstream ML consumption and automated trading signal generation

Education

Bachelor of Applied Science (BASc), Computer Science

Austin Peay State University · 2011 – 2015

Languages

  • English — Native
  • Spanish — Fluent
  • German — Elementary

Volunteer Work

Board Member & Treasurer

Scenic City Clay Arts · June 2022 – June 2025

  • Provided financial oversight and built custom data visualizations for the executive director to support strategic decision-making
  • Applied mathematical modeling to set realistic donation income targets and advocated for improved reporting systems to deliver actionable financial data to the board

Campaign Data Strategist

Hamilton County Circuit Court Judge Campaign · 2024

  • Acquired and processed local voter data to identify key locations for campaign outreach; built targeting models that contributed to improving Democratic voter turnout from ~24% to ~36% in the district

Site Builder & Foreman Assistant

Habitat for Humanity International · March 2014 – Present

  • Contributed to construction projects in Trinidad and Tobago, supporting housing development and poverty alleviation initiatives

Let's Connect

I'm open to opportunities in data engineering, software engineering, and ML/AI roles — especially where I can build the pipelines and infrastructure that turn raw data into real business value.