- Greensboro/Winston-Salem, North Carolina Area, US
- [email protected]
A Cloud Architect, Innovator, and Inventor with industry experience leading major projects across organizational boundaries, helping organizations migrate seamlessly to new systems and adopt new technologies.
My goal is to help companies find their own secure path to the cloud.
My personal journey with security and services started over twenty years ago at AT&T Bell Laboratories, writing source code for one of the first Internet firewalls on our own B1-hardened secure version of UNIX. It was fun and fascinating to hack into systems (with permission), to find and anticipate security holes, and to write patches for the operating system.
When the world wide web exploded into existence, Jim Bidzos of RSA Security personally recruited me to become the security architect at a little startup called Visto -- where we invented ways to securely store and synchronize bookmarks, files, photos, and email in the Cloud. We were like Dropbox ahead of our time, but unfortunately broadband wasn't widely available, and we wound up with plenty of patents but not enough customers.
Amazon recruited me to help them with scalability and security. As a principal engineer, I led the teams that developed Amazon's first distributed service "CustomerMaster". We created our own Object-Relational Mapping (ORM), giving an editable graph of objects to our service clients. Since CustomerMaster managed all of Amazon's customers and their shopping preferences, it had to be highly available, redundant, and fault tolerant.
But before we could build a service around customer data, we had to refactor a huge code base and gather all customer information into its own database cluster, because at that time Amazon was relying on Oracle replication to synchronize customer data across six databases. The system was straining under the replication load and could not have scaled through the Christmas season that was looming ahead of us. Failure was not an option. Those were exciting times!

But when my wife became chronically ill, I took a sabbatical to help her recover and to enjoy raising my kids and seeing them off to college. Now I've returned to scaling, the cloud, security, and programming, working freelance as a cloud consultant (naturally focusing on AWS), and most recently helping SocialCode.com migrate to Docker containers.
Docker amazes me with its simplicity and elegance. I enjoy helping companies discover how Docker and the cloud can simplify their architecture and development/deployment cycle, and helping them find the right migration path while minimizing the pain and maximizing the gain.
Moving legacy systems to the cloud can be an enormous challenge -- like performing open heart surgery on a beating heart -- because business never stops.
I enjoy helping companies to find their own secure path to the cloud, and to find a workable migration path from where they are to where they want to be. Although my focus is on architecture and scalability, I also enjoy writing and refactoring source code with innovative solutions and inventing tools to bridge the gaps.
After a career pause to focus on family, I returned to cloud computing, which is my first love in terms of technology (the second being computer security).
SocialCode Inc: December 2015 - present: Cloud Computing Architect/Consultant guiding their path from classic services to Docker containers on ECS and their own AWS Virtual Private Cloud. [SocialCode is one of the largest providers of social marketing to Fortune 500 companies with a team of data scientists and software systems to optimize advertising performance across the major social media platforms.]
Trove.com: September 2014 - November 2015 (closing): Coordinator for projects that cut across multiple teams. [Trove.com was a social news curation service, similar to Flipboard, for Graham Holdings Company.]
Unfortunately Trove.com did not acquire enough curators or readers to sustain its business model, so it ceased operations in November 2015.
Small Personal Projects to Stay Current: 2010 to 2014:
One of roughly a dozen principal engineers at Amazon at that time
Customer Master Service: Architect and Project lead for Amazon's first distributed service. CustomerMaster managed all of Amazon's customer information, including email and street addresses, credit card tokens, purchasing preferences, login and authentication.
Customer Master was a mission-critical service, highly available, fault-tolerant, and redundant, without which no customer could place an order in the United States, France, the United Kingdom, or Japan.
An Object-Oriented Programming API presented objects to business-logic clients such as the shopping cart and ordering modules. Clients could perform complex interactions and even create new objects locally, then synchronize everything to the Customer Master Service with a single, atomic save, using optimistic locking to preserve data integrity and high availability.
The first use of globally unique identifiers at Amazon for client-side object creation.
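The optimistic-locking save described above can be sketched roughly as follows. This is a minimal illustration, not Amazon's actual API; all class and method names here are invented:

```python
class Record:
    """A client-side object; 'version' supports optimistic locking."""
    def __init__(self, key, data, version=0):
        self.key = key
        self.data = data
        self.version = version

class VersionConflict(Exception):
    pass

class Store:
    """Minimal server-side store that rejects stale writes."""
    def __init__(self):
        self.rows = {}

    def save_all(self, records):
        # Validate every record first, then commit the whole batch,
        # so the save is all-or-nothing from the client's point of view.
        for r in records:
            current = self.rows.get(r.key)
            if current is not None and current.version != r.version:
                raise VersionConflict(r.key)
        for r in records:
            r.version += 1
            self.rows[r.key] = r

store = Store()
a = Record("addr-1", {"city": "Seattle"})
store.save_all([a])  # accepted; server version becomes 1

stale = Record("addr-1", {"city": "Boston"}, version=0)
try:
    store.save_all([stale])  # rejected: server is already at version 1
except VersionConflict:
    print("conflict detected")
```

The key property is that no locks are held between read and save; a conflicting concurrent update is detected at save time and the client retries, which keeps the service highly available.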
Session Directory: Designer and Project Lead for Amazon's first sharded database solution. Session Directory managed all of Amazon's shopping cart information, including grouping catalog items into orders.
This two-tier solution predated the Customer Master Service. It partitioned shopping carts and orders into multiple databases, using a weighted random allocation that allowed our DBAs to add new partitions and redistribute the load for maintenance. Each horizontal partition (or shard) consisted of a replicated pair of identical databases, so a hot standby was always available.
Managed this priority-zero project (above all others), without which Amazon's infrastructure could not have scaled for Christmas of 2001. Went from concept to production in four months, and launched with no downtime by adding the new partitions with an initial weight of zero.
Everything worked. No existing shopping carts were lost and the shopping cart databases scaled through the Christmas season.
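Weighted random allocation of new carts to shards can be illustrated with a short sketch. The shard names and weights below are invented for illustration; the real system's directory and replication machinery were of course far more involved:

```python
import random

# Each shard is a replicated pair of databases; 'weight' controls how much
# NEW traffic it receives. A freshly added shard can launch at weight 0
# and be ramped up gradually, which is how a zero-downtime launch works.
shards = [
    {"name": "shard-A", "weight": 3},
    {"name": "shard-B", "weight": 3},
    {"name": "shard-C", "weight": 0},  # just added; receives no new carts yet
]

def allocate_shard(rng=random):
    """Pick a home shard for a new shopping cart, proportionally to weight."""
    eligible = [s for s in shards if s["weight"] > 0]
    total = sum(s["weight"] for s in eligible)
    pick = rng.uniform(0, total)
    for s in eligible:
        pick -= s["weight"]
        if pick <= 0:
            return s["name"]
    return eligible[-1]["name"]

# Existing carts never move; a directory maps cart id -> shard name,
# so allocation only applies to newly created carts.
directory = {"cart-123": allocate_shard()}
```

Because the weights only govern new allocations, DBAs can add a partition at weight zero, verify it, then raise its weight to shift load without touching any existing cart.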
Customer Master Database: Project Lead for Amazon's first successful large-scale database refactoring project, taken from concept to production in four months.
Prior to this, customer data was scattered, intermingled, and co-joined with shopping cart and order information across six huge Oracle databases on the most powerful Unix hardware available at that time. These databases were already straining under the load of six-way multi-master replication, because every update resulted in six writes, and replication conflicts were common.
Goal: Untangle and relocate all customer data into a single database with Coordinated Universal Time (UTC) dates, and refactor over a million lines of code to access the new location.
Without this refactoring, Amazon's existing databases could not have scaled for Christmas of 2001.
Since the project had already been attempted, and had failed once before I came to Amazon, resulting in a lengthy outage and negative publicity, Jeff Bezos gave me two demands: don't make the cover of the Wall Street Journal, and only take the site down for one hour.
To prepare for this project, I contacted the team members of the failed project to assess what went wrong and to build fail-safes into our process to avoid repeating history. On the surface, the objectives seemed to be impossible, because customer data was involved in almost every aspect of Amazon's systems from shopping to fulfillment, but we were able to achieve them following a phased approach:
Phase 1 -- Refactor all SQL statements to avoid joins between orders, shopping carts, and customers (which would no longer work once those tables were not locally available in the same database).
Phase 2 -- Virtual Customer Master Database. We created a virtual customer master database consisting of nothing but DbLinks to the real database tables. This allowed us to test our new SQL and allowed us to begin to refactor the source code.
Phase 3 -- Refactor All Source Code. We used a call-graph analysis tool to find all paths to statements affecting customer data, and refactored the code to access customer data at its new virtual location.
Phase 4 -- Dual Mode Access. In the staging environment, we cut over completely to the new test database, but in production, the new code paths toggled at run-time to use the old locations. The source code and databases were instrumented to report every incorrect access to the old locations.
Phase 5 -- Live Launch. On launch night at midnight, we took Amazon offline, allowed the six-way replication logs to play out, initialized the new Customer Master Database, toggled our run-time switch to the new location, and brought the system back online within the hour. Every system worked.
Phase 6 -- Post Launch. One secret to our success was that we left DbLinks behind in the six old databases pointing to the new Customer Master Database. This fail-safe strategy allowed any tools and utilities that might have been overlooked to continue to work; but we contacted the owners and gave them a short time window to correct their code. Thirty days later, the DbLinks were removed and the new Customer Master Database project was completed.
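The dual-mode access of Phase 4 and the launch-night cutover of Phase 5 amount to a run-time toggle with instrumentation on the legacy path. A rough sketch, with all names invented and the stores reduced to dictionaries:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("customer-access")

# Run-time switch: False during Phase 4 (production still reads the old
# location), flipped to True on launch night once the new database is ready.
config = {"use_new_db": False}

# Stand-ins for the old intermingled databases and the new Customer Master.
old_db = {"cust-1": {"email": "customer@example.com"}}
new_db = dict(old_db)

def get_customer(customer_id):
    """Route customer reads through the toggle; report legacy accesses."""
    if config["use_new_db"]:
        return new_db[customer_id]
    # During Phase 4, every access to the old location is instrumented,
    # so any overlooked code path shows up in the logs before launch.
    log.warning("legacy access: customer %s read from old location",
                customer_id)
    return old_db[customer_id]
```

Because the switch is evaluated on every call, the cutover needs no redeploy: flip the flag, and all traffic moves to the new location at once, while the instrumentation has already proven that no untracked path remains.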
We hit all our milestones and kept all our promises. The new customer database scaled through the Christmas season; and replication conflicts were completely eliminated by design.
Personally recruited by Jim Bidzos and David Cowan of RSA Data Security and Bessemer Venture Partners, I joined Visto in 1996 with an equity position in an exciting startup focusing on Cloud computing, or "ubiquitous computing" as we referred to it in 1996.
We quickly developed an amazing suite of services including synchronization of files, emails, bookmarks (across many browsers), and offered a secure gateway to facilitate authentication and synchronization.
I personally developed Visto's load-balancer, HTML template engine, File Server and Email Gateway, using Java and C++.
Invented applet-based authentication, temporary certificates, and remote encryption, among a dozen patents awarded that have resulted in over three hundred million dollars in settlement payments from Research In Motion, Microsoft, Seven Networks and other infringing companies.
Visto featured file-synchronization and global secure storage ten years before Dropbox.com!
Sadly, Visto was ahead of its time -- broadband was not widely available yet, so the business model never proved to be profitable. Visto was later acquired by Good Technology.
A member of the team that developed one of the first Internet firewalls, the SV/MLS Computer Watch Trusted Gateway, and the SV/MLS B1 Secure version of Unix.
Wrote device drivers, utility commands, administration GUI, and the Postfix Alarm Language (PAL) used for real-time intrusion detection.
This was hard-core, C-language, operating-system programming. But Bell Labs was so focused on pure research that they rarely managed to capitalize on their inventions.
My wife Cynthia became mysteriously ill with what we now call chronic fatigue, so I took a sabbatical to help her and to spend more time with my young sons. Unfortunately our son Samuel was stricken with schizophrenia, so I extended my sabbatical indefinitely (except for some small projects to stay current with technology) until we tragically lost him to suicide in 2012. After grieving, we decided to focus on the future and on loving those who are with us now, and I returned to cloud computing, but worked as a telecommuter to support my wife.
Artificial Intelligence, Pipeline Processing
Technology adoption and innovation must focus on business needs and follow a sensible migration plan so companies can break free from past technologies and technical debt with minimal interruption.
Large projects gain the most traction when they follow a clear plan that addresses the needs of every stakeholder so the entire organization can "buy in" to the plan and achieve the impossible.
Leading projects that cut across organizational boundaries with the potential for massive impacts on developers, designers, and product managers is my definition of "normal."
An innovative solution can reframe the problem and even change how we think about it. And little inventions can fill in the missing pieces so new technologies can be adopted before they mature completely.
Docker, AWS, ECS, Git, Jenkins, Django, Linux Admin, TCP/IP Networking, Distributed Systems, Cloud Computing
See patents.google.com for full details on my patents covering Cloud Security and Synchronization.
* Please note that when these patents were filed, the term "cloud computing" was not yet in common use, so the patents use the terms "global", "remote", "distributed", "ubiquitous", and "universal" to describe what we now call "cloud computing".