
Summary

The core philosophy I adhere to stems from comprehensive architectural experience, cultivated from the ground up across systems, backend and frontend software, networking, databases, and security. My professional journey reflects this commitment to in-depth knowledge, offering holistic understanding and mentorship in service of overarching product objectives.

Technology Development:

  • Edge Computing & Infrastructure: With a foundation in early ISPs and satellite services, my edge computing experience is vast, having driven initiatives like edge-level caching, site synchronization, and optimized traffic routes, predominantly on Linux-based systems.

  • Comprehensive Linux Engineering: Specializing in data centers, embedded solution design, and network appliance development, ensuring optimal and scalable systems performance.

  • Scalable and Efficient Design: Proficient in creating scalable designs in processing queues, process analysis, ingestion, and index management, facilitating smooth and efficient operations.

  • Database Development: Implementing precise database-specific software development practices to guarantee transactional integrity and atomicity, even in the context of working with eventually consistent data streams.

  • Advanced Cloud Architectures: Crafting innovative cloud architectures employing serverless functions, containers, scalable databases, and custom service integrations.

  • Hardware Management & IoT Device Development: Comprehensive low-level hardware management, solution design, and development of IoT devices, ensuring the seamless integration of technology at every level.

  • Full-stack Development: Frontend and backend development capabilities in customer portals, social media software, and billing and invoicing systems demonstrate a well-rounded software development skill set.

  • Platform Engineering, CI/CD & Large Scale Telemetry: A proponent of a holistic SDLC approach, I've specialized in CI/CD with an emphasis on code safety, secrets management, and secure artifact creation, leveraging tools like SaltStack, Jenkins, Terraform, and Kubernetes. Expertise in monitoring and large-scale telemetry eventing and cataloging ensures a seamless and effective development lifecycle and robust data management.

  • Image & Video Processing: Proficient in image and video ingest, processing, publishing, and analysis pipelines, allowing for comprehensive multimedia management.

Community Contribution & Industry Experience:

  • A longstanding history with early ISPs and satellite services, advocating for optimized site-to-site solutions and caching mechanisms.

  • A staunch advocate and contributor to Free and Open Source software, actively promoting its growth and development, thereby enriching the technological community.

  • Multiple roles encompassing TechOps, DevOps, CodeOps, and NetOps demonstrate versatility and the ability to adapt to varied operational needs, often leading as 'BossOps'.

  • Entrepreneurial initiatives in developer experience, systems stability, and remote access reflect a proactive approach to industry developments and a passion for enhancing user and developer experiences.

Leadership & Mentoring:

I firmly believe in the power of learning through teaching and leading by example. It is crucial to make space for both, as they are the building blocks of progressive knowledge dissemination and collective growth within any team or organization. My commitment to these values is reflected in the mentoring and guidance provided to peers and subordinates to foster a collaborative and learning-centric environment.

This multifaceted experience, coupled with a steadfast commitment to technology development and community contribution, offers a balanced and enriched perspective, promoting the realization of product goals through knowledge sharing, innovative solutions, and industry-best practices. By synthesizing technological proficiency with a passion for community enrichment and leadership, I aim to drive forward both organizational and industry-wide progress.

Volunteer Efforts:

As a lead engineer at the Global Centre for Risk and Innovation (GCRI), I work alongside a team of dedicated volunteers to organize Nexus Hackathons. These hackathons, aligned with UN frameworks, focus on creating solutions that enhance quality of life, security, and provide civic responses to global challenges. My role extends beyond the GCRI, as I also host grassroots hackathons to support individuals pursuing career shifts or engaging in civic projects, emphasizing mentorship and community involvement in technology for social impact.

Work experience

Department of Veterans Affairs / DocMe360 / Brute Technologies
11-2023 – Current

Senior Software Development Engineer / Data Science and Quality Assurance

Full Time - Remote, US

In the Clinical Decision Support (CDS) group at the VA, my responsibilities included developing a software pipeline that integrated MDClone and MITRE Synthea with tailored Python software, meeting the project's specific needs for longitudinal synthetic patient data updates with manual tuning for particular population needs.
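
As an illustrative sketch only (hypothetical names and data, not the VA's actual code), the manual population-tuning step for a synthetic cohort can be pictured as nudging the prevalence of a condition toward a target rate:

```python
import random
from dataclasses import dataclass, field

@dataclass
class SyntheticPatient:
    patient_id: str
    age: int
    conditions: list = field(default_factory=list)

def tune_population(patients, condition, target_rate, rng=None):
    """Adjust a synthetic cohort so roughly `target_rate` of patients
    carry `condition`, mimicking manual population tuning."""
    rng = rng or random.Random(0)  # fixed seed: reproducible synthetic data
    for p in patients:
        want = rng.random() < target_rate
        if want and condition not in p.conditions:
            p.conditions.append(condition)
        elif not want and condition in p.conditions:
            p.conditions.remove(condition)
    return patients

cohort = [SyntheticPatient(f"P{i:03d}", 40 + i % 30) for i in range(200)]
tuned = tune_population(cohort, "diabetes", 0.25)
rate = sum("diabetes" in p.conditions for p in tuned) / len(tuned)
```

In practice this kind of adjustment ran against Synthea/MDClone output rather than in-memory records, but the shape of the operation is the same.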

My role in quality assurance and testing-platform development involved mocking complex services, data-warehouse synchronization, and developing NodeJS/React and Python-backed dashboard components for efficient test-patient management. Collaborating with VistA developers, I focused on data accuracy across environments and managed clinical contexts using SMART on FHIR and custom integrations.

A key aspect of my work was ensuring compliance with HIPAA, HL7, and VA/Federal regulations for PII and PHI, particularly in data transmission and processing. Additionally, I utilized AWS Government Cloud for secure data management.

Part of my role also entailed learning the ins and outs of government contracting under the PTEMS contract, gaining valuable insights into the specific requirements and processes of government projects.

Furthermore, I was responsible for developing API and end-user documentation, contributing to knowledge management and ensuring clear communication of information.

My responsibilities also included implementing security protocols and engaging in AI/ML healthcare projects, requiring an independent and adaptable work approach.

Requirements:

  • Experience with software pipeline development, including Python for data processing.
  • Working directly with Apache Kafka, AWS Lambda, AWS Fargate, GitHub Actions, and much of the AWS suite of services.
  • Quality assurance expertise, particularly in API mocking and dashboard development.
  • Proficiency in data science for synthetic patient data management.
  • Collaboration with VistA developers and experience with SMART on FHIR integrations.
  • Learned and implemented database models similar to InterSystems IRIS to better comprehend
    multi-dimensional array storage and fast indexing for performant analytics.
  • Began working with Mumps (M).
  • Adherence to HIPAA, HL7, and VA/Federal PII and PHI regulations.
  • Familiarity with AWS Government Cloud and feature sets and limitations within available
    services.
  • Experience in developing API and end-user documentation.
  • Participation in AI/ML healthcare projects.
  • Knowledge of government contracting, specifically under the PTEMS contract.
  • Ability to work independently in dynamic environments.
Adobe, Inc. / TekSystems, Inc.
07-2023 – 11-2023

Senior Software Development Engineer & Security Analyst

Full Time - Remote, US

At Adobe, Inc., a leading software developer known for its expansive suite of creative and productivity tools, I played an integral role in enhancing the Software Bill of Materials (SBOM) services. Collaborating closely with the Legal, Security, and Cybersecurity teams, I was deeply involved in building and managing the SBOM services, striving to ensure that they were not only compliant with industry standards but also set new benchmarks.

Central to my contributions was Python development using FastAPI and a custom workflow engine developed by the team. My engagement with the OWASP CycloneDX community enriched my understanding of the intricacies of the specification. This in-depth knowledge proved invaluable when addressing pressing challenges like penetration testing, insider threat analysis, and countering supply chain vulnerabilities, notably dependency confusion malware. Recognizing the importance of collective understanding, I consistently shared this knowledge with my team, fostering a cohesive and well-informed approach.
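
For context on what the CycloneDX work entails: a minimal CycloneDX 1.4 BOM is just structured JSON, which a sketch like the following can produce (hypothetical component data, not Adobe's services):

```python
import json

def minimal_cyclonedx_sbom(components):
    """Build a minimal CycloneDX 1.4 JSON BOM from (name, version, purl) tuples."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "version": 1,
        "components": [
            {"type": "library", "name": name, "version": version, "purl": purl}
            for name, version, purl in components
        ],
    }

bom = minimal_cyclonedx_sbom([
    ("fastapi", "0.95.0", "pkg:pypi/fastapi@0.95.0"),
])
doc = json.dumps(bom, indent=2)  # serialized BOM, ready to publish or diff
```

Production SBOM services layer signing, storage, and policy checks on top, but this is the document format everything else revolves around.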

Managing Adobe's vast repositories was another critical aspect of my role. I architected a configuration engine optimized for git processing that incorporated necessary cybersecurity precautions. Leveraging tools like Docker, Kubernetes, and AKS, I streamlined deployment processes. Additionally, I brought technical expertise across various dimensions, encompassing developer tooling, database design, cybersecurity protocols, and more.

I also worked with regulated software controls to adhere to cybersecurity standards and initiatives for software produced by Adobe for use in secure and sensitive environments.

Requirements:

  • Proficiency in Python development using FastAPI and adherence to OWASP and cybersecurity standards.
  • Expertise in penetration testing, insider threat analysis, and mitigation of supply chain vulnerabilities.
  • Mastery in repository management, including skills with Git, GitHub/GitLab API/Webhook, and Kubernetes.
  • Familiarity with containerization through Docker and cloud services like AWS SQS and RDS.
  • Adeptness in CI/CD, leveraging tools such as CircleCI, ArgoCD, Tilt, and optimized git processing.
  • Comprehensive experience with developer tooling, database design, Jira, Confluence, and cybersecurity practices.
Metify, Inc.
01-2023 – 05-2023

Principal Software & Hardware Engineer

Full Time - Remote, US

At Metify, an emerging startup known for delivering innovative platform-as-a-service provisioning solutions, I played a key role in advancing the Mojo service. Beginning with a Django-based Proof of Concept (POC) that relied heavily on Docker, we transitioned toward Debian and Ubuntu packages, targeting stability for Long-Term Support (LTS) distributions alongside strategies suited to edge-computing distributions. My efforts focused on API development, Python web services, and managing complex workloads, primarily with Python 3.9 and 3.11.

My role entailed the integration and management of key baseboard components, involving the seamless integration of diverse x86 and ARM servers. This process was underscored by the use of protocols and services such as DNS, DHCP, Discovery Services, TFTP, and HTTP, alongside customized BMC integrations. A critical aspect of my contribution was the development and enhancement of bespoke services essential to the network's functionality and the operational efficiency of our product.

Security was a paramount concern in our operations. Contributing significantly to the Secure Boot initiative for Debian and Ubuntu, I handled the intricacies of working with signed shim, iPXE, and grub2 packages. This task was complemented by developing custom key management strategies using netboot facilities, a move instrumental in amplifying our product's security posture. My focus encompassed a broad spectrum of security measures, including penetration testing and cybersecurity, to safeguard against a variety of threats such as supply chain attacks.

Our project management style embraced a team-centric approach, maximizing the use of tools like Jira and GitHub. By adopting and integrating various Continuous Integration/Continuous Deployment (CI/CD) methodologies, we aspired to maintain a consistent and efficient developmental flow. I was actively involved in sprint management, retrospectives, and influencing the overall trajectory of our projects.

Collaboration with the hardware engineering team was another critical component of my job, aimed at aligning our software initiatives with hardware releases, particularly for data center deployments. This collaboration was vital in ensuring that our software and hardware strategies were cohesively executed for optimal performance and deployment.

Requirements:

  • Expertise in languages and frameworks including Python, JavaScript/TypeScript, Rust, and Django REST Framework.
  • Proficiency in networking, encompassing IP services, BMC integrations, NGINX, and Debian/Ubuntu packaging.
  • Mastery in various facets of security, including penetration testing, Linux provisioning, Docker, and software supply chain security.
  • Competency in development management tools, notably Jira and GitHub, complemented by experience in CI/CD strategies and thorough documentation.
  • Creation of FedRAMP-approved EC2 images and AWS Marketplace catalog items for upcoming cloud initiatives.
Serverless, Inc.
04-2022 – 07-2022

Python Telemetry Engineer

Part Time - Remote, US

At Serverless Inc., a leading provider of tools and services tailored for serverless application development, I was actively involved in enhancing their 'console' product. This platform emphasizes metrics and observability, catering to serverless application optimization needs. A significant portion of my contributions revolved around crafting Python libraries in sync with OpenTelemetry. This was driven by an understanding of the unique demands of Javascript workloads within serverless scenarios. An emphasis on asynchronous process design enabled me to achieve more streamlined telemetry grouping and collection, directly impacting the improvement of specification management and introducing new features to the console.
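
The asynchronous grouping pattern described above can be sketched with plain asyncio, assuming a hypothetical producer and a sentinel-terminated batcher (an illustration, not the console's actual exporter):

```python
import asyncio

async def producer(queue, n):
    # Emit n telemetry events, yielding control between sends.
    for i in range(n):
        await queue.put({"span": i, "service": "console"})
        await asyncio.sleep(0)
    await queue.put(None)  # sentinel: no more events

async def batcher(queue, batch_size):
    # Group events into fixed-size batches before export.
    batches, current = [], []
    while True:
        event = await queue.get()
        if event is None:
            break
        current.append(event)
        if len(current) >= batch_size:
            batches.append(current)
            current = []
    if current:
        batches.append(current)  # flush the partial final batch
    return batches

async def main():
    queue = asyncio.Queue()
    _, batches = await asyncio.gather(producer(queue, 10), batcher(queue, 4))
    return batches

batches = asyncio.run(main())
```

Batching this way keeps exporter calls coarse-grained, which is what makes telemetry affordable inside short-lived serverless invocations.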

Recognizing the diverse challenges presented in real-world applications from my previous engagements with Serverless technologies, my work consistently integrated practical use cases. This integration ensured a holistic approach, capturing even those nuances that might be overlooked in a more theoretical framework.

Requirements:

  • Proficiency in Serverless architecture and asynchronous process design.
  • Expertise in telemetry grouping and collection, along with a strong foundation in Python and JavaScript/TypeScript.
  • Hands-on experience with OpenTelemetry and AWS services, highlighting SQS and API Gateway.
  • Publishing telemetry results to intermediate AWS S3 storage.
EveryoneSocial / Gravit
12-2021 – 12-2022

Senior Software Engineer & Platform Engineer

Full Time - Remote, US

EveryoneSocial provides a robust social media SaaS solution for businesses eager to amplify their reach through employee advocacy and engagement. Serving predominantly Fortune 500 companies, the platform has continually evolved to stay ahead of industry needs. As a Platform and Backend Software Engineer at EveryoneSocial, my role was multifaceted, touching on development, cybersecurity, and optimization.

I played an instrumental role in enhancing our product by delving into the intricacies of microservices and serverless architectures, predominantly leveraging AWS services like Lambda, DynamoDB, AppSync, and the Serverless Framework. Recognizing the importance of data in decision-making, I incorporated AWS Timestream and Athena to bolster our data warehousing capabilities, and simultaneously finessed AWS RedShift operations for optimum data handling. Emphasizing compatibility, I modified OpenTelemetry components to work seamlessly with platforms like DataDog.

User experience being paramount, my contributions in React and React Native were primarily geared towards introducing security enhancements and functional changes, with a notable focus on GraphQL integrations. Further, to streamline interactions with major social media platforms, I designed a custom SQS workflow engine. These technical enhancements were paired with strategic endeavors in Python packaging, leading to the inception of a comprehensive software bill of materials (SBOM), which in turn fortified telemetry, refined event validation, and ensured resource allocation was fluid and efficient.

In the realm of cybersecurity, my endeavors ranged from conducting thorough penetration testing to detailed insider threat analysis. Actively collaborating with security researchers, I ensured a proactive stance, addressing potential vulnerabilities in our platform, rectifying concerns in our Python and JavaScript software, and formulating strategies to safeguard our databases and telemetry from unforeseen breaches.

While technical accomplishments were numerous, the developer experience was not overshadowed. By integrating tools tailored for Python packaging and ensuring software consistency, I facilitated an environment where developers could transition smoothly from ideation to deployment.

This journey with EveryoneSocial was rooted in teamwork and collaboration, continually drawing from team feedback to align and often exceed stakeholder aspirations.

Requirements:

  • Proficiency in AWS tools: Lambda, DynamoDB, Timestream, Athena, AppSync, API Gateway V1/V2, RedShift, CloudFormation, CloudWatch, S3, EC2, Kinesis.
  • Conversion of AWS Glue (Java/Python) data migration and ingestion process to AWS Lambda.
  • Mastery of serverless architectures using Python.
  • Experience in React and React Native development, accentuated with GraphQL integrations.
  • Familiarity with OpenTelemetry and DataDog for advanced telemetry and user analytics.
  • Cybersecurity expertise covering penetration testing, insider threat analysis, and proactive measures against supply chain threats.
  • Skilled in data warehousing, event tracking, and leveraging CycloneDX/SPDX SBOM Tools.
  • Fundamentals with essential developer tools, notably Jira and Confluence.
  • Development of AWS Athena/S3 Data Lake storing impression, activity, and structured log data.
  • Jenkins/EC2 configuration and maintenance.
Taos / IBM Consulting
12-2020 – 12-2021

Senior Software Engineer & Technical Interviewer

Full Time - Remote, US

At Taos Mountain, Inc., a company recognized for delivering tailored IT and cloud solutions that facilitate seamless digital transformations and comprehensive IT support, my tenure as an AWS specialist was foundational to the firm's overarching vision. My primary responsibilities centered on the development and stewardship of client products. Delving into the AWS ecosystem, I regularly interfaced with key components like AWS Lambda, DynamoDB, AppSync, and Timestream.

A significant portion of my role involved the intricate integration of our clients' social media applications with third-party platforms such as Slack and Stripe. This ensured that our outbound notification systems were robust, facilitating efficient communication and streamlined transactional operations. With an eye on security and consistency, I managed meta-package development and conducted dependency scanning across approximately 100 microservices, an initiative aimed at cultivating a standardized library requirement set aligned with SOC I/II benchmarks and compliant with OWASP guidelines. Furthermore, my oversight of out-of-band AWS Lambda metrics was pivotal: by meticulously directing telemetry data to the relevant auditing and analytics platforms, I was able to enhance event logging, refine application analytics, and provide accurate exception reporting.
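
The dependency-scanning idea can be illustrated with a small, hypothetical allowlist check (names and versions invented for the example):

```python
def audit_dependencies(declared, allowlist):
    """Flag dependencies whose pinned version is not in the approved set."""
    violations = {}
    for name, version in declared.items():
        approved = allowlist.get(name)
        if approved is None or version not in approved:
            violations[name] = version
    return violations

# One service's pinned dependencies vs. the organization-wide approved set.
declared = {"requests": "2.31.0", "pyyaml": "5.3.1"}
allowlist = {"requests": {"2.31.0"}, "pyyaml": {"6.0", "6.0.1"}}
bad = audit_dependencies(declared, allowlist)
```

Run across ~100 microservices, a check like this is what turns per-team pinning habits into a single auditable baseline.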

Beyond the purely technical, I ventured into the domain of talent acquisition, conducting 4-5 technical interviews on a monthly basis. Leveraging my extensive engineering background, I evaluated candidates not only for their technical acumen but also for their suitability for ongoing and upcoming projects.

Requirements:

  • Proficiency in AWS components: Lambda, DynamoDB, Timestream, AppSync, API Gateway V1/V2, and more.
  • Expertise in Serverless development using Python.
  • Adeptness in API integration and development, with a particular focus on third-party platforms like Slack and Stripe.
  • Familiarity with Jira, Confluence, GraphQL, and Velocity Templates.
  • Emphasis on security, with skills in incident remediation and adherence to SOC I/II and OWASP standards.
  • Experience in technical interviewing, leveraging extensive engineering knowledge for candidate assessments.
Brute Technologies / Sole Proprietor
01-2002 – Current

Technology Development & Technical Consulting

Part Time, Remote, US and Anchorage, AK

Throughout my career, I've embraced roles that combined technical expertise and strategic vision, primarily within the startup ecosystem. In Alaska, my entrepreneurial activities often saw me adopting the CTO role for emerging companies. This allowed me to contribute to diverse projects, from advanced analytics solutions in advertising technology to innovative IoT systems for hydroponics, notably with Gardyn, a leader in hydroponic technology.

My work in advertising technology focused on creating sophisticated analytics and post-processing frameworks. I developed custom marketing analytics, ingestion, and dashboard facilities used in video on demand and web advertisement projects focused on revenue reporting and spend integrity. These systems enhanced the efficiency and effectiveness of campaign management and ad auditing. Within the IoT space, my projects included the development of control systems for hydroponic environments, contributing to more sustainable and technologically advanced agricultural practices. I gained extensive experience with Apache Kafka and cloud platform-specific event busses, crucial in handling command management and telemetry gathering of IoT devices in the field.

Beyond these areas, my technical expertise broadened to encompass projects such as SnapCraft/Snap packaging and the deployment of Ubuntu Core on specialized hardware. In the realm of telecommunications, I played a pivotal role in the development of high-capacity Asterisk dialer and telephony systems, which were further enhanced by integrating machine learning components for increased functionality.

Additionally, my foray into the medical sector involved pioneering work with generative solutions to create synthetic patient data for a substantial hospital network. In doing so, I designed and implemented mock interfaces to guarantee that the data mining processes adhered strictly to security and regulatory standards. This involved the use of InterSystems IRIS / VistA for the secure storage of non-sensitive patient data sets. My focus also extended to SMART on FHIR serverless software development, centering on adherence to HL7/FHIR standards, critical to ensuring the security and integrity of clinical work and the seamless integration of custom health solutions.

As an independent consultant collaborating with local Alaskan developers, I realized several projects, including a carbon offset platform tailored to the needs of Alaska Native corporation land asset managers. My consultancy work for recognized software companies like Yellow Dog Linux, Serverless, Inc., and WaveFront involved a range of innovative solutions encompassing telemetry integrations and high-performance computing elements.

In addition to hands-on project involvement, my home laboratory is a testament to my ongoing commitment to exploring new frontiers in technology. This private workspace serves as a dynamic environment for testing and honing skills in infrastructure, security, and cloud tooling.

Requirements and Services:

  • Proficiency in advertising technology, emphasizing analytics and post-processing.
  • IoT solutions expertise, particularly in hydroponics.
  • Data warehousing of advertising impressions and high speed real-time analytics queries using AWS Athena.
  • Advanced knowledge in SnapCraft/Snap Packaging and Ubuntu Core implementations.
  • Skill in developing Asterisk Telephony Systems and DICOM imagery processing in telemedicine.
  • Familiarity with designing and implementing carbon offset platforms.
  • Experience in telemetry integrations and high-performance computing.
  • Cybersecurity analysis and researching.
  • AWS consulting around API Gateway and Lambda integrations, Elastic Container and Beanstalk, and AWS Fargate.
  • Event bus and workflow guidance and planning including work surrounding Apache Kafka, AWS Kinesis and EventBridge, and AWS SQS prioritized heap queues (custom).
  • AWS S3 based synchronization for HL7/Radiological information for clinical studies.
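
The "prioritized heap queue" pattern mentioned above can be sketched in-memory with Python's heapq, assuming one logical priority tier per source queue (an illustration of the ordering logic, not the SQS-backed implementation itself):

```python
import heapq
import itertools

class PriorityHeapQueue:
    """In-memory sketch of a prioritized dispatcher fed by several
    queue sources (e.g. one SQS queue per priority tier)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a priority

    def ingest(self, priority, message):
        # Lower priority number = dispatched sooner.
        heapq.heappush(self._heap, (priority, next(self._counter), message))

    def drain(self):
        while self._heap:
            _, _, message = heapq.heappop(self._heap)
            yield message

q = PriorityHeapQueue()
q.ingest(2, "bulk-report")
q.ingest(0, "page-oncall")
q.ingest(1, "invoice-sync")
q.ingest(0, "failed-webhook")
ordered = list(q.drain())
```

The counter matters: SQS offers no ordering across queues, so the tie-break is what preserves FIFO behavior among messages of equal priority.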
CGI / IBM / AT&T
08-2016 – 12-2020

System Engineer / Senior Software Developer

Full Time - Anchorage, AK

At AT&T Alaska, a regional arm of the well-established telecommunications entity, the mission revolves around delivering communication solutions uniquely adapted to Alaska's distinct and challenging conditions. This commitment extends beyond standard telecommunication services, emphasizing a robust infrastructure mindful of Alaska's vast geographical scope and varied terrains. The ultimate goal is to guarantee uninterrupted connectivity and premier communication services for residents and businesses alike, while adhering to regulatory and commitment requirements on a plant that spans an enormous area and combines a very diverse set of communication protocols and transmission solutions.

In this setting, I was responsible for optimizing intra-project solutions and boosting workflow automation tailored to the region's specifics. I spearheaded the design of internal cloud solutions employing resources like SaltStack, KVM, LXC/LXD, and Ceph. In the web development spectrum, I employed a gamut of technologies, notably VueJS, Python 3, NodeJS, RiotJS, and AngularJS. For the specialized AT&T Alaska ERP initiative, I was instrumental in shaping custom service ordering front-ends. To strengthen our infrastructure, I laid the foundation for internal HTTP load balancers, rolled out BeyondCorp-style access controls, and crafted mail services influenced by Sendgrid's methodology. All these efforts were complemented by my steadfast support for AT&T Alaska's diverse network infrastructure, which traverses fiber, copper, and satellite pathways, keeping regional customer necessities at the forefront.

Skill Requirements:

  • Expertise in cloud solutions: SaltStack, KVM, LXC/LXD, Ceph.
  • Proficiency in web development: VueJS, Python 3, NodeJS, RiotJS, AngularJS.
  • Mastery in network management: HTTP load balancing, BeyondCorp access controls, Sendgrid-inspired mail services.
  • Familiarity with infrastructure support: AT&T Alaska ERP, diverse network channels, and data center upkeep.
  • Custom backup solution leveraging AWS S3 and AWS EC2/Lambda as part of a disaster recovery and cybersecurity initiative.
ABR Inc.
04-2013 – 01-2015

Geographic Information System (GIS) Specialist

Full Time - Anchorage, AK

ABR, Inc., based in Alaska, is an ecological research firm dedicated to scientific consulting, emphasizing environmental stewardship, research, and monitoring. Their mission fosters sustainable development and conservation in Alaska.

During my tenure with ABR, Inc., I focused on GIS web application development, especially processing vast sets of imagery from sources like NASA. This imagery was crucial for defining Alaska's terrain features. A significant part of my role involved removing elements like clouds from these images, ensuring optimal accuracy for our analyses.

I developed a GIS web application tailored for these large-scale visualizations and created a swath-based mapping tool for monitoring project progress across Alaska. These tools, underpinned by technologies like Python, PostgreSQL, MySQL, and GDAL, became indispensable assets for our team.
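
As a simplified illustration of the cloud-removal step (a crude brightness threshold standing in for real cloud detection, with invented sample values):

```python
def mask_clouds(tile, threshold=0.8, fill=None):
    """Replace pixels brighter than `threshold` (a rough cloud proxy)
    with `fill`, so downstream terrain analysis ignores them."""
    return [
        [fill if value > threshold else value for value in row]
        for row in tile
    ]

# A tiny 2x3 reflectance tile; values near 1.0 read as cloud-bright.
tile = [
    [0.12, 0.95, 0.30],
    [0.88, 0.25, 0.10],
]
cleaned = mask_clouds(tile)
```

The production pipelines did this at raster scale with GDAL and multi-band cloud masks rather than a single threshold, but the masking idea is the same.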

Additionally, I contributed custom modules to QGIS, continuously collaborating with fellow GIS experts to refine our applications and processes.

Requirements:

  • Mastery in Linux, Python, PostgreSQL/PostGIS, MySQL, OGR/GDAL, and ESRI GIS.
  • Expertise in data ingest, database partitioning, image analysis, and frontend web development.
  • Proficiency in GIS and terrain feature analysis.
Microcom / Sateo
05-2008 – 07-2012

Telecom Manager & System Engineer

Full Time - Anchorage, AK

At Microcom, a prominent satellite communication provider with an expansive sales footprint, I played a key role in enhancing the technological backbone that supported our sales and installation teams across Hawaii, Alaska, and Idaho.

To strengthen our inter-regional communication, I devised telephony services, intricately working with Asterisk PBX. I developed modules and fostered integrations that bridged our operations seamlessly.

A noteworthy initiative was the creation of a custom web service tailored for field data capture. This solution not only replaced traditional methods but also ensured the smooth integration of field data with our in-house systems for in-depth analysis. Advanced signature processing was a highlight, encompassing detection, orientation, and alignment functions. With this, we could generate bespoke PDF reports enriched with refined signature stamping.

To realize this project, I predominantly utilized Python and Django. Furthermore, I delved into reverse-engineering mobile-specific form input applications, thereby refining our data capture capabilities on the ground.

Supplementary to these, I was entrusted with managing call center voice operations and innovating internal tools that augmented our data processing, sales insights, and workforce monitoring efforts. A custom software solution for international voice termination was also brought to life under my direction, benefiting a Microcom subsidiary.

Requirements:

  • Expertise in Python, Django, telephony services, and web service development.
  • Proficiency in Asterisk PBX integration, data integration, mobile application adaptation, and voice management.
  • Experience in edge computing, local caching, and network services.

Skills

Software Development

My foundational strength in software development lies within Python, JavaScript, and TypeScript. I've extensively utilized web development frameworks such as Flask, Django, and the modern FastAPI. Alongside this web expertise, I've ventured into desktop software development using frameworks like Qt and GTK+ and delved into cross-platform solutions using React Native, Flutter, and Electron. This diverse background has also seen me gain experience in the evolving world of blockchain networks, specifically in automating market activities. Asynchronous development methodologies and modern web technologies are integral to my toolbox, allowing me to deliver robust solutions. While I am highly proficient in these areas, I also have a foundational understanding of languages like C, C++, Go, and Rust, ensuring versatility in my skill set.

Technologies: Python, JavaScript, TypeScript, Flask, Django, FastAPI, VueJS/NuxtJS, Angular, RiotJS, React, Python/Tornado, Python/AsyncIO, Redis, ZeroMQ, Qt, GTK+, React Native, Flutter, Electron, Blockchain, C, C++, Go, Rust, MUMPS (M), and InterSystems ObjectScript.

Workflow and Event Bus Solutions

With a strong foundation in event bus and workflow solutions, my work emphasizes advanced event-driven architectures, incorporating AWS services and open-source platforms. This foundation extends to the realm of IoT and telemetry, where I adeptly handle the consumption and processing of substantial data volumes, a key aspect of real-time analytics and decision support systems. These skills are particularly valuable in managing high-volume workflows in complex SCM systems and enhancing software supply chain security. Integrating AWS SQS, EventBridge, and Kinesis/DynamoDB with Apache Kafka, RabbitMQ, and Celery, I contribute to creating scalable, efficient, and secure solutions applicable in software development, IoT, and telemetry.

Technologies: AWS SQS, EventBridge, Kinesis/DynamoDB, Apache Kafka, RabbitMQ, Celery, SCM systems, software supply chain security, IoT telemetry consumption and processing.

Database Management

Databases are the backbone of most applications, and I've had direct experience with platforms such as MongoDB, SnowflakeDB, and PostgreSQL. From high-speed indexing to sharding, my experience spans both the design and deployment phases. I'm passionate about leveraging the full capabilities of a database, often stating, "Code for the database, don’t database for the code."

Technologies: MongoDB, SnowflakeDB, PostgreSQL, MySQL, Oracle, PL/SQL, PL/Python, Foreign Data Wrappers, SQLAlchemy and other such ORMs, InterSystems Caché, and IRIS for Health.

Systems & Network Engineering

I bring a solid foundation in networking, with expertise in routing, high-availability, and custom network logic developed in Python. My familiarity with distributed computing tools allows me to design systems that are both robust and scalable. My hands-on experience with Linux distributions has further enriched my system engineering skills.


Technologies: Ceph, SaltStack, Debian/Ubuntu, RedHat/CentOS, Alpine, AMQP/MQTT. AWS VPC, EC2, and Private VPN. AWS S3 and S3 Glacier.

Observability & Monitoring

Ensuring efficient system monitoring is critical, and with my involvement in OpenTelemetry, I've worked on integrating various metrics, logging, and trace events for platforms like DataDog and AWS X-Ray. My emphasis has always been on proactive monitoring, ensuring minimal data loss, and efficient payload handling.

Technologies: OpenTelemetry, DataDog, AWS X-Ray, CloudFront, CloudWatch, EventBridge, Kinesis.

Cloud & Edge Computing

I've engaged with a variety of cloud platforms, with a deep-rooted experience in AWS, managing services from Lambda to CloudFormation. My work extends to edge computing, emphasizing multi-site synchronization and the distribution of local network services.


Technologies: Digital Ocean, AWS, Microsoft Azure, RedHat OpenShift, Docker, Kubernetes, Serverless, CloudFlare.

Telecommunications

Telecommunications form a significant part of my expertise, from deploying Asterisk configurations to designing distributed PBX systems. I've delved into the intricacies of Internet services, ensuring optimal communication across wired and wireless networks.


Technologies: Asterisk, SIP, DHCP/TFTP, Mesh Networks, Satellite Communication.

Geographic Information Systems (GIS)

Geospatial systems have been an area of focus, especially with data management for large field surveys. I've utilized tools like Python and PostGIS to deliver best-in-class geospatial solutions, with a special emphasis on model-driven load distribution.

Technologies: Python, OGC GDAL/OGR, OSM, NASA/USGS, PostGIS, MongoDB.