Work Experience | Meta - Cambridge, Massachusetts, and New York City, New York
Software Engineer
April 2022 - June 2024
Skills used: C++ (up to and including C++20), Java, Python, PHP/Hack, GraphQL,
TypeScript, Kotlin, Swift, gTest, gMock, folly, OpenXR, Gradle,
EMG, Android development (for phone and Oculus), iOS development,
metagen, LLaMA, Bento (Jupyter notebooks)
-
Member of the Multi-Modal-Input-Platform team, developing neural interfaces.
Member of the Multi-Modal-Input-Platform team, developing drivers and
application software for Meta's EMG (Electromyography) wristbands that
detect signals in motor neurons in the wrist. Used the EMG and IMU data
available from these APIs as a controller for augmented reality glasses,
and as an additional control device for virtual and mixed reality on
Meta's Quest platforms.
-
Controller Mapping project.
Team lead on a project to integrate EMG data into multiple VR/AR
operating systems, mapping EMG gestures onto Oculus controller buttons,
joysticks, and IMU position, to allow control of existing applications
without requiring the use of our EMG libraries. This enabled rapid
prototyping by internal teams evaluating various EMG use cases with end
users. Gave a division-wide demonstration of playing the video game Moss 2
entirely with EMG gestures - no controllers.
-
POC for IMU code.
Point of contact for all IMU (Inertial Measurement Unit) code for Meta's
wristband APIs, using accelerometer, gyroscope, and magnetometer data to
determine wristband orientation, acceleration, and derived velocity values.
This work provided position and orientation of the user's wrist in
3D space, as well as acceleration data. This IMU data was also used as
training data (alongside EMG data) for on-device AI models that detect
gestures and perform handwriting recognition.
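The production fusion code was internal; as a rough illustration of the idea,
here is a minimal complementary filter blending gyroscope integration with an
accelerometer tilt estimate (a sketch only - the names and blend constant are
assumptions, not Meta's actual API):

    import math

    ALPHA = 0.98  # assumed blend factor: trust gyro short-term, accelerometer long-term

    def fuse_pitch(pitch_prev, gyro_rate_y, accel_x, accel_y, accel_z, dt):
        """One complementary-filter step for pitch, in radians."""
        # Short-term estimate: integrate the gyro's angular rate (rad/s).
        pitch_gyro = pitch_prev + gyro_rate_y * dt
        # Long-term estimate: the tilt implied by gravity in the accelerometer.
        pitch_accel = math.atan2(-accel_x, math.hypot(accel_y, accel_z))
        # Blend: the gyro dominates instantaneously, the accelerometer corrects drift.
        return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_accel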
-
Enabled keyboard support on Quest via EMG gestures.
To help other teams evaluate the user experience of EMG swipe gestures
(up/down/left/right) with a keyboard, created a keyboard mode for
controller mapping, saving an augmented reality glasses project whose
deadline would otherwise have been missed. After keyboard mode was started
via an EMG gesture, my code modified the poses (location and orientation)
of the Oculus controllers in the virtual environment so that one controller
hovered in front of the user, aimed at the keyboard. When an EMG swipe
gesture arrived, the code imposed a new pose on the controller, aiming it
at the key one step up, down, left, or right from the previously targeted
key, with visual, audio, and haptic feedback that a new key had been
selected. Implemented a state machine to determine which key (and which
controller orientation) to move to next (see the sketch below).
Demonstrated typing "Hello, World!" entirely via EMG gestures on an
unmodified Oculus virtual keyboard (which itself had no d-pad support).
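The shipping implementation was C++ inside the controller-mapping layer; its
core reduces to a small state machine like this Python sketch (the key grid is
a stand-in, and the real pose math is omitted):

    # Simplified sketch: the "pose" is just a (row, col) position on a key grid.
    KEYBOARD = [
        list("qwertyuiop"),
        list("asdfghjkl"),
        list("zxcvbnm"),
    ]

    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    class KeyCursor:
        def __init__(self, row=0, col=0):
            self.row, self.col = row, col

        def on_swipe(self, gesture):
            """Step to the adjacent key for an EMG swipe, clamping at the edges."""
            dr, dc = MOVES[gesture]
            self.row = max(0, min(len(KEYBOARD) - 1, self.row + dr))
            self.col = max(0, min(len(KEYBOARD[self.row]) - 1, self.col + dc))
            return KEYBOARD[self.row][self.col]  # the key now aimed at

    cursor = KeyCursor()
    assert cursor.on_swipe("right") == "w"
    assert cursor.on_swipe("down") == "s"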
-
OpenXR Sample EMG Wristband app - tic-tac-toe.
Wrote an OpenXR tic-tac-toe sample program for the Oculus Quest, controlled
by the user wearing an EMG wristband and making moves on the board with
different in-air finger gestures. This was one of the standard apps given
to other teams to illustrate the use of EMG.
-
Hackathon project: Using an LLM in Facebook Search
Creator and sole member of a hackathon project to let users ask an LLM
(Large Language Model) chatbot to help them locate old Facebook (or
Workplace - Meta's "internal Facebook") posts they could no longer find.
Implemented in a Bento notebook (Jupyter notebook) in Python, the project
took a natural-language query (like "I went to Makerfaire a few years ago,
and one of the coolest things for exhibitors was this treat truck that they
had the night before, with delicious pastries") and rendered Facebook posts
that the user had seen, picked by the LLM. The user's query was first passed
to one of two LLMs, which was asked for good search terms (in this case it
might return "makerfaire treat truck"). A GraphQL query against Facebook's
backend datastore then retrieved posts the user had seen that matched those
terms. The results were formatted into a query for the second LLM, asking it
to score each post against the user's original query from 0.0 to 1.0 and to
explain each score. The resulting scores were sorted (in case the LLM failed
to order them from highest to lowest) and presented as rendered HTML, with a
score, clickable link, and AI justification for each result (see the sketch
below). This staged approach worked around a 2K-token limit on queries and
responses and the lack of an embedding feature that would have allowed large
numbers of posts to be read at once.
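In outline, the notebook's flow looked like the following Python sketch
(hedged: ask_llm and graphql_seen_posts stand in for internal Meta endpoints
and are hypothetical names):

    import json

    def find_old_posts(user_query, ask_llm, graphql_seen_posts):
        """Two-stage LLM retrieval sketch.

        ask_llm(prompt) -> str            : hypothetical LLM endpoint
        graphql_seen_posts(terms) -> list : hypothetical search over posts
                                            the user has previously seen
        """
        # Stage 1: distill the verbose query into compact search terms.
        terms = ask_llm(f"Give short search terms for: {user_query}")

        # Retrieve candidate posts the user has actually seen.
        posts = graphql_seen_posts(terms)

        # Stage 2: score each candidate against the original query.
        scored = []
        for post in posts:
            reply = ask_llm(
                "Score from 0.0 to 1.0 how well this post matches the query, "
                'as JSON {"score": ..., "why": ...}.\n'
                f"Query: {user_query}\nPost: {post['text']}"
            )
            verdict = json.loads(reply)
            scored.append((verdict["score"], verdict["why"], post))

        # Sort ourselves rather than trusting the LLM's ordering.
        scored.sort(key=lambda entry: entry[0], reverse=True)
        return scored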
-
Mobile development - Meta Wearables App (iOS and Android)
Added an Advanced Diagnostics page with multiple features to the
"MWA" Meta Wearables App ("Meta View") to help diagnose issues with paired
EMG wristbands; Android work was done in Kotlin and Java, and iOS work in
Swift. Features included a tabbed interface displaying live visualizations
of EMG and IMU (acceleration and gyroscope) data from the wristband,
on-screen indicators of observed and derived gestures, and more.
Created a framework of detectors to monitor the EMG wristband and flag
problems such as low battery, device asleep/awake, power-line interference,
signal dropouts, EMG/IMU timestamp drift, and others, and displayed the
detectors' live output in the app.
-
Handwriting Study Training Data toolset
Wrote multiple "text studies": specific handwriting-training experiences
given to user research subjects to gather EMG and IMU training data for
our AI handwriting models. This involved
custom prompts for users to follow, instructions on how to use the
EMG wristbands, specific random text of various kinds for the user
to "write", on-screen timers and UI elements for study navigation, etc.
Data gathered by this tool was used to train Meta's EMG handwriting
models (which allow users wearing an EMG wristband to write on a desk,
their leg, or their other palm as a form of data entry).
Studies interacted with the hardware, recorded video streams of the user
for ground truth about the handwriting gestures performed, and stored all
EMG and IMU data in the cloud.
-
Websocket Unit Tests.
Wrote exhaustive unit tests for the websocket protocol used by early
prototype EMG devices. This work established the team's use of gTest and
gMock, and involved intricate futures/promises interactions via the folly
library.
Akamai - Cambridge, Massachusetts
Senior Software Engineer II
May 2020 - April 2022
Skills used: Java, Python, MapReduce, HDFS, Hive, YARN, Docker,
Ant, git, JUnit, MRUnit, Jira, JSON, JSON Lines
-
Created GPSTool geocoordinate analysis tool suite
Created the GPSTool project, a big-data cluster-based log analysis tool
suite for the mining and analysis of geocoordinates found in web log files.
This highly scalable tool suite comprised multiple commands, each
implemented as one or more chained Hadoop MapReduce jobs:
locationOfInterestActivity, extractAllGeoCoordinates, extractPaths,
extractAllGeoCoordinatesToJson, tagPointsAsDeviceLocation,
tagClusteredPointsAsDeviceLocation, and tagCityAndLowPrecision.
The GPSTool suite read data from HDFS or local file systems, ran through
a user-specifiable pipeline of commands to classify point types and
interaction types, and output the results to either HDFS or local file
systems. The tool would typically run with dozens of terabytes of input
and generate multiple terabytes of output.
-
extractAllGeoCoordinatesToJson
This command read input from one of many known (or custom) Akamai log file
formats and extracted all geocoordinates from web paths/referrers/cookies
(including multiple levels of nested URL encoding). Coordinates could be
from one field containing a lat-lon pair, or from multiple fields where
lats and lons needed to be discovered and paired. Multiple levels of
heuristics helped rule out false positives and determine a confidence level
in each extracted coordinate. Geo-database lookups identified points known
to be associated with city centers, etc. Extracted log information
(and any initial classifications) was output in the EGLI format (see below).
-
locationOfInterest
Returned all coordinates within a specified distance of a given point or
area. One example output format was a KML file (for each
location of interest) containing all hits within the specified range,
to be viewed in Google Earth.
-
tagCityAndLowPrecision
This looked at all occurrences of not-yet-classified low-precision
coordinates across all requests for the specified period and classified
any low-precision coordinates that were *popular* as CITY. Low-precision
coordinates with fewer than a threshold number of hits were classified
as LOWPRECISION.
-
tagClusteredPointsAsDeviceLocation
This command performed a cluster-analysis on all of the not-yet-classified
high-precision coordinates across all requests for the specified period.
Any popular hits were classified as POINT (a high-resolution-yet-common
point). Other hits were grouped into clusters or stray points: points in
a cluster were classified as DEVICE_LOCATION, while remaining
high-resolution stray points were left as UNKNOWN.
This was one of two approaches taken to try to find coordinates that
represent the location of the actual device making the request (as
opposed to queries for other external locations on the globe).
-
extractPaths
Another approach to determining which points could be classified as
DEVICE_LOCATION involved looking for perceived movement in subsequent
geocoordinate hits for a given identifier (such as the client's IP
address). This command created "paths" by looking at all extracted
geocoordinates for each client IP address, sorted by time. If a point
was close enough in time and distance to a previous point (such that
it could be reached in time at driving speed or less), then it was
considered part of that path. This allowed us to handle even
carrier-grade NAT'ed IP addresses with potentially thousands of
users on the same IP address, outputting paths for each actual
user/device.
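The production jobs were Java MapReduce; the per-IP reduce step boiled down to
logic like this Python sketch (the speed cutoff is an assumption, and the real
heuristics were more involved):

    import math

    MAX_SPEED_M_S = 40.0  # assumed "driving speed or less" cutoff

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two coordinates, in meters."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def extract_paths(hits):
        """Group one identifier's (timestamp_s, lat, lon) hits into plausible paths."""
        hits = sorted(hits)  # by time
        paths, current = [], []
        for t, lat, lon in hits:
            if current:
                t0, lat0, lon0 = current[-1]
                dt = t - t0
                # Reachable at driving speed or less? Then it is the same path.
                if dt > 0 and haversine_m(lat0, lon0, lat, lon) / dt <= MAX_SPEED_M_S:
                    current.append((t, lat, lon))
                    continue
                paths.append(current)
            current = [(t, lat, lon)]
        if current:
            paths.append(current)
        return paths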
-
tagPointsAsDeviceLocation
The tagPointsAsDeviceLocation command took the output of extractPaths
and any extracted geo log information and would classify any points
that occurred within a path as being DEVICE_LOCATION. This was an
alternative to the clustering approach above, and could be applied
in parallel to identify points that *were* device locations but were
too far apart to cluster (such as points on a freeway with fast
moving vehicles).
-
Author of the EGLI data format
EGLI (Extracted Geo Log Info) is a JSON Lines format used to hold extracted
geocoordinate information from a variety of origin log formats, including
custom/bespoke intermediate formats specified on-the-fly. Individual input
log lines would generate one EGLI line each, which contained lists of
Geo objects with their coordinates, precisions, inferred classifications,
origin fields, etc.
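Conceptually, one EGLI record looked like the following (shown wrapped here,
though each record occupies a single line; field names are illustrative, not
the actual schema):

    {"ts": "2021-03-04T12:00:01Z", "clientIp": "203.0.113.7",
     "geos": [{"lat": 42.3601, "lon": -71.0589, "precision": 4,
               "classification": "UNKNOWN", "originField": "referrer"}]}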
-
Author of the GPSToolBatch pipeline specification
GPSToolBatch is an XML format that defines complex MapReduce pipelines,
chaining together the output of one MapReduce command into another,
allowing for configuration of each tool, tee-ing off output of any of the
jobs to external locations, specification of task inputs from sources other
than their immediate predecessor (such as earlier steps or external HDFS
inputs), and specifying ultimate pipeline inputs and outputs.
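A pipeline definition looked roughly like this (the element and attribute
names are reconstructions for illustration; the real schema differed):

    <pipeline input="hdfs:///logs/2021-03-04" output="hdfs:///out/run1">
      <job name="extractAllGeoCoordinatesToJson"/>
      <job name="extractPaths">
        <tee output="hdfs:///out/paths-debug"/>  <!-- copy output elsewhere too -->
      </job>
      <job name="tagPointsAsDeviceLocation">
        <!-- also consume the EGLI output of the first step, not just extractPaths -->
        <input ref="extractAllGeoCoordinatesToJson"/>
      </job>
    </pipeline>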
- Created the build, testing, and kitting infrastructure for
GPSTool, including targets to distribute via Debian package, .tgz file,
or Docker container. Everything was rigorously tested via JUnit and
MRUnit.
- Created a classification tool for another project that involved
looking at patterns of software downloads to identify standard usage
patterns, leaving the remaining potentially-suspicious activity for human
analysis. The tool examined a week's worth of requests and identified
patterns of downloads across all users (e.g., first this manifest file,
then this catalog file, then sometimes this .exe).
It output these patterns (combinable via regexp and ignoring-line-order
options) along with statistics about average timings, delta times, etc.
Amazon.com - Cambridge, Massachusetts
Software Development Engineer II
August 2018 - November 2019
Skills used: Java, Scala, Python, IntelliJ, Eclipse, Ant,
git, JUnit, Jira, Elasticsearch, Apache Spark,
Zeppelin, many AWS services (S3, Lambda, CloudFormation,
CloudWatch, SQS/SNS, EMR, Athena, IAM, Amazon Translate),
XML Schema, JSON, CSS, XPath, and many Amazon-internal
tools including brazil, odin, octane, and Eider.
- Worked on three different ingestion pipelines at Amazon (across multiple
groups) to supply content to Amazon Echo devices (Amazon Alexa) for the
purpose of answering arbitrary customer questions.
- Enhanced the codebase in areas including blacklisting, metrics delivery,
multi-locale support, Elasticsearch ingestion, and CloudFormation
pipeline construction/configuration.
- Used this pipeline to acquire, normalize, and transform curated customer
content (in several languages), then ingested the cleaned result into
Elasticsearch indices. Deployed these mission-critical indices onto
production systems across the world, on a regular basis. These indices
allowed Alexa to answer long-tail difficult-to-answer questions.
- Added custom support for multiple content feeds, including Wikipedia
(English, German, French, Spanish, Italian), Simple English Wikipedia,
Wikihow, MNN.com, and an initial run of gutefrage.net. Later systems would
generalize this feed mechanism and eliminate the need for individual
content code.
- Solved a problem fetching images that was affecting every user of an
Echo Show or Echo Spot globally. Planned new functionality based on this
fix to support multiple images per content feed, which was then handed off
to the appropriate group.
- Helped in a lengthy high-risk transition from Elasticsearch 5.3 to 6.3,
all the while maintaining customer SLAs.
- Helped the team migrate to SAMToolkit, allowing the entirety of pipeline
construction to be tested in actual deployment environments, including the
creation and configuration of all AWS resources. This prevented risky
deployment failures that would have affected customers.
- Wrote integration tests that spun up custom EMR clusters to do ingestion
runs, then spun them down when complete. These were integrated into
production pipelines to ensure code/content quality and avoid regressions
or new bugs before code deployment.
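In spirit, the cluster-lifecycle portion of those tests looked like this boto3
sketch (the cluster parameters are placeholders, not the production
configuration):

    import boto3
    from botocore.exceptions import WaiterError

    emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

    def run_ingestion_test(steps):
        """Spin up a throwaway EMR cluster, run ingestion steps, tear it down."""
        cluster = emr.run_job_flow(
            Name="ingestion-integration-test",  # placeholder name
            ReleaseLabel="emr-5.20.0",          # placeholder release
            Instances={
                "MasterInstanceType": "m4.large",
                "SlaveInstanceType": "m4.large",
                "InstanceCount": 3,
                "KeepJobFlowAliveWhenNoSteps": False,  # auto-terminate when done
            },
            Steps=steps,
            JobFlowRole="EMR_EC2_DefaultRole",
            ServiceRole="EMR_DefaultRole",
        )
        try:
            # Block until the cluster terminates cleanly; the waiter raises if
            # it ends in TERMINATED_WITH_ERRORS (i.e. an ingestion step failed).
            emr.get_waiter("cluster_terminated").wait(ClusterId=cluster["JobFlowId"])
            return True
        except WaiterError:
            return False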
- Owner of an aggregate content feed of news, specifically crafted for
speakable content. Deployed multiple drops to downstream customers.
- Periodic on-call work involved looking at our live CloudWatch dashboards
daily, resolving tickets submitted by downstream customers, maintaining
data pipelines, and fixing build problems.
MIT Lincoln Laboratory (through Minuteman Group Inc) - Lexington, Massachusetts
Consulting Software Engineer
October 2009 - August 2018
Skills used: Java, Swing, Android SDK, Blackberry SDK, Python, Eclipse, Ant,
Maven, Mercurial, git, JUnit, Jira, several embedded device
projects, Google Maps APIs, 3D printing, OpenSCAD, Epiphan SDK,
XML Schema, JSON, Arduino development, Raspberry Pi, ODROID,
Django, Cuckoo Sandbox, MinIO.
Security Clearance: Yes
-
Architect/Sole-developer of K0ALA human emulation Cyber Range Testing tool.
Developed from scratch a KVM-based Cyber Range Testing and Malware Analysis
Tool (K0ALA - KVM-based Zero-Artifact LARIAT Agent), which allows software
to drive arbitrary GUI applications on a system under test (SUT) without
having any code or other artifacts on that SUT. This design allows K0ALA to
appear (to malware or human observers) to be human, and avoids contaminating
the testing environment with detectable artifacts. Wrote image recognition
code to monitor a framegrabbed stream of video output from the SUT, looking
for user-provided key images within each frame. Wrote actuation code that
allows K0ALA to respond as a human would with artificial keyboard and mouse
events. Created a human attention emulator engine that simulated waning
human attention, causing K0ALA to occasionally switch between complex
off-host tasks. The project grew for eight years, and was used by multiple
government agencies. Gave classes and training on K0ALA to the
US Air Force Research Laboratory (AFRL Rome), C4AD in the Joint Chiefs of
Staff, United States Joint Forces Command (JFCOM), the 46th Test
Squadron out of Eglin Air Force Base, and the FAA. This software was a key
competitive differentiator for MITLL in the Cyber Range arena.
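K0ALA itself was written in Java; the frame-matching core is easy to convey
with OpenCV in Python (a rough analogue, not the actual implementation; the
threshold and the commented helper calls are assumptions):

    import cv2

    MATCH_THRESHOLD = 0.9  # assumed confidence cutoff

    def find_key_image(frame_bgr, key_image_bgr):
        """Search one framegrabbed frame for a user-provided key image.

        Returns the (x, y) of the best match, or None if confidence is too low.
        """
        result = cv2.matchTemplate(frame_bgr, key_image_bgr, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_loc if max_val >= MATCH_THRESHOLD else None

    # Example: react as a human would once a login prompt appears.
    # frame = grab_frame_from_epiphan()        # hypothetical capture call
    # if find_key_image(frame, login_prompt):  # login_prompt: a loaded key image
    #     type_like_a_human("password\n")      # hypothetical actuation call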
-
Invented K0ALAScript scripting language
Wrote a Visual Programming Language (stored as an XML-based scripting
language) to describe how to perform human actions such as typing, using
a mouse, turning physical switches and knobs, and raising/lowering voltages
on arbitrary input lines.
-
Implemented Multiple Control Protocols
Wrote a pluggable framework into K0ALA that allowed custom control protocols
to be created for communicating with the SUTs. For the most
artifact-sensitive applications, created the EpiphanControlProtocol that
used COTS hardware (the Epiphan KVM2Ethernet and KVM2USB devices) to do
video processing of a framegrabbed stream of the SUT's video output and
provide artificial keyboard and mouse input to the SUT through PS2/USB
cables. For use cases less concerned about artifacts, wrote the
VNCControlProtocol, which allowed monitoring and controlling a SUT via the
VNC Remote Desktop protocol. For cases where the SUT was a virtual machine
running on a VMware ESX server, wrote the ESXControlProtocol, which
communicated with the ESX server to get a VNC interface for that virtual
machine. For testing purposes, created the LocalControlProtocol, which used
Java's Robot API to monitor and control the machine K0ALA itself was
running on. For devices needing alternate input/actuation, created
the ArduinoControlProtocol (and a K0alaArduinoServer Arduino sketch) to
communicate with an Arduino board via ethernet to turn servos, actuate
switches, and raise/lower voltages on arbitrary input lines.
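Conceptually, the pluggable framework reduced every protocol to the same small
surface; a Python sketch of the abstraction (the real interface was Java and
richer):

    from abc import ABC, abstractmethod

    class ControlProtocol(ABC):
        """What K0ALA needs from any way of observing/driving a SUT (sketch)."""

        @abstractmethod
        def grab_frame(self):
            """Return the SUT's current video frame."""

        @abstractmethod
        def send_key(self, key):
            """Deliver an artificial keystroke to the SUT."""

        @abstractmethod
        def send_mouse(self, x, y, button=None):
            """Move the artificial mouse, optionally clicking."""

    # Implementations plugged in per use case (names from the text above):
    # EpiphanControlProtocol - framegrabbed video in, PS2/USB events out
    # VNCControlProtocol     - monitor/control over VNC
    # ESXControlProtocol     - VNC obtained from a VMware ESX server
    # LocalControlProtocol   - Java's Robot API against the local machine
    # ArduinoControlProtocol - servos, switches, and voltage lines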
-
Created K0ALAStudio integrated development environment
Wrote an IDE for K0ALA that let the user write K0ALAScript via a visual
programming interface, or directly via XML. The tool allowed the user to
connect to a SUT via one or more control protocols, interact with the SUT
live via keyboard and mouse, grab key images for future image searches,
then test the script live. This is the main frontend for K0ALA development.
-
Created K0ALAListener server service
Built a Linux service called K0ALAListener that accepts commands via socket
or AMQP to start or stop connection sessions and jobs, and open up display
windows for existing sessions when needed (if an X-Window display is
available). Once a K0ALA project has been developed in K0ALAStudio, it is
actually deployed in a cyber event within a K0ALAListener server.
K0ALAListener provided a command-line interface for users connecting to its
open socket, or via the RemoteK0ALA tool.
-
Actuated many applications with K0ALA, for demos and sponsors
Gave demonstrations of K0ALA on a regular basis, to many sponsors.
Wrote K0ALA actuations to let K0ALA browse the web, play Minesweeper,
chat via Instant Messenger, play Angry Birds, drive the Lightweight
Portable Security (LPS) LiveCD, play the first five levels of PAC-MAN,
navigate through the dashboard of an Xbox 360 gaming console, and play
Halo: Reach against itself on two Xbox 360s for literally years nonstop
(as a permanent demo and reliability test). Later Unclassified examples
include K0ALA actuating Theater Battle Management Core Systems (TBMCS)
in Air Operation Centers, actuating MIT Lincoln Laboratory's ancient ROSA
RADAR operator console software, and actuating Global Command & Control
System (Joint) (GCCS-J) for the C4AD in the Joint Chiefs of Staff. K0ALA
is in use across the country.
-
Conference Presentations
Presented K0ALA at MITLL's Cyber and Netcentric Workshop (CNW) classified
conference in 2010, 2014, and 2017. Presented K0ALA at Malware Technical
Exchange Meeting (MTEM) in 2010.
-
Multiple Mobile Development Projects.
Wrote multiple mobile device applications for various projects.
Applications included a full-featured BlackBerry Storm application and an
Android mapping application with Bluetooth device integration.
Mentored MIT intern students on research into the remote control of
arbitrary iPhone and Android applications and OS behavior.
-
Mobile Device Actuation Station
Designed and built the Mobile Device Actuation Station project, a 3D-printed
2-axis robot that moved a capacitive stylus across mobile devices such as
iPads and tactical tablets to actuate them. The project was halted midway
for funding reasons, but plans included a mounted overhead camera to
obtain video from the device, which would be fed to K0ALA via a custom
control protocol. The robot was designed in OpenSCAD and printed on a local
3D printer. The goal was to emulate a human on mobile devices requiring
actions such as swipes and touches, without hacking the devices or
leaving any artifacts (software) on them.
-
3D printing education
Gave multiple talks on 3D printing, both for division meetings and for
MITLL's Build Anything course.
Kiva Systems - Woburn, Massachusetts
Senior Software Engineer
January 2008 - August 2009
Skills used: Java, C, C++, SQL, Swing, Ant, Maven, Subversion (svn),
JUnit, Jira, Embedded device work, low level protocol design,
mobile robotics, wifi, emulation, AB DF1 protocol,
programmable logic controllers (PLCs), hardware testing, RSLogix
-
Sole developer responsible for Kiva's Emulator.
Heavily modified, enhanced, and maintained Kiva's Emulator, an application
responsible for simultaneously emulating 1000+ Kiva robots, tens of
thousands of movable storage pods, sensor equipment, multiple vertical
reciprocating conveyors (VRCs - elevators for automated non-human use),
and large industrial hardware.
-
Equipment Communications.
Designed and implemented the communications infrastructure through which
Kiva's Material Handling System (MHS) talks to equipment, with pluggable
protocol layers for communication via different types of devices,
and runtime-switchable transport mechanisms (which allowed
serial-line protocols to be tested via TCP/IP-based unit tests).
Wrote a BB232SDD16 layer for communication with an array of presence
sensors used on both the Automated (Unmanned) Shipping station and the
Tape and Dunnage station for diapers.com (as well as creating an emulation
for those presence sensors). Wrote an ABDF1 layer (including a full
Allen Bradley DF1 stack created from scratch in Java) for communication
with both our VRCs/elevators and with a large automated trash dumper for
Walgreens (again, along with emulations of the VRCs and trash dumper).
Designed the system to be modular/expandable, allowing rapid prototyping
of new hardware and hardware emulations.
-
Mobile Robot Communications.
Maintained the DUA (drive unit agent - the code that communicates with
the robots via wifi and tells them specifically what to do). This
multi-level state machine implements the various high and low level
missions that a robot needs to perform, such as starting up and
determining where a robot is on the floor, driving to a charging station
to recharge its internal battery, or fetching a storage pod and presenting
it to a human picker. Added automatic recovery from lost wifi
communication, a problem that had been costing customers significant time
and lost profit. Added various data reporting features to collect information
on the robots' battery charge levels, internal state,
maintenance-cycle-requests, etc. Did extensive work on the lifting
procedure for the pallet-lifting model of Kiva robot.
-
Automated (Unmanned) Shipping station.
Team Lead on a complex feature for diapers.com called Automated Shipping
(also known as Unmanned Shipping). Robots carry special shipping
pods/shelves which hold completed ready-for-shipping boxes. They are
called to a shipping station by the MHS, they arrive, and then ask the
station for permission to enter. The station then communicates with an
array of presence sensors mounted to a conveyor belt to see if the conveyor
is empty. When the conveyor becomes empty, permission is granted for the
robot to drive beneath the conveyor, with the shipping pod/shelf being held
above the conveyor. The robot turns, then drives beneath a scraper bar that
holds back the shipping boxes, depositing them on the conveyor. That
conveyor leads to a flexi-conveyor, which is extended and routed out into
a shipping truck. After logging into the unmanned shipping station, the
shipping worker selects which carrier's truck is there (UPS, USPS, FedEx),
takes the flexi-conveyor into the truck, and then boxes specifically for
that carrier arrive and he/she starts packing. There is no danger of boxes
arriving too fast, because of presence-sensor-based control at the station.
In addition to leading the team, wrote the workflow code (the business
logic for the shipping station), the controller communications, emulations
for the presence sensors, several unit and integration tests, and a median
filter (sketched below) to eliminate sensor noise experienced with the
first hardware prototypes.
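A median filter is the standard fix for this kind of single-sample sensor
glitch; in essence (a Python sketch, with the window size assumed):

    from collections import deque
    from statistics import median

    class MedianFilter:
        """Sliding-window median: one-sample spikes never become the output."""

        def __init__(self, window=5):  # window size is an assumption
            self.samples = deque(maxlen=window)

        def update(self, raw_reading):
            self.samples.append(raw_reading)
            return median(self.samples)

    f = MedianFilter()
    # A one-sample glitch (the 1) is absorbed; every output is zero.
    print([f.update(v) for v in [0, 0, 1, 0, 0, 0]])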
-
Work on 2 and 3 floor Mezzanines with VRCs.
In-house, Kiva had constructed both a 2-level mezzanine and a 3-level
mezzanine, for mobile robots to drive on and fetch storage pods from. The
former had two controlled VRCs (elevators) and the latter had a new
double-carriage VRC (able to lift multiple robots simultaneously).
Worked heavily on the code that both controls these VRCs and keeps
robots from using them until it was safe to do so. Wrote code, residing on a
Soekris embedded Linux board, that acted as a bridge between the MHS and
the MicroLogix 1100 PLC tied to the elevator control lines. Contributed
to the design of the 3-floor mezzanine/VRC, and after its construction,
was the sole software-team developer getting it fully operational.
Gotuit Media - Woburn, Massachusetts
Principal Software Engineer
November 2005 - November 2007
Skills used: Java (J2ME, CLDC 1.1, MIDP 2.0, JSR-135), C, PowerTV set-top box
environment, SARA (Scientific-Atlanta Resident Application),
BlueStreak (Adobe Flash/ActionScript-based STB platform)
- Single-handedly brought Gotuit Media from a two-medium (broadband and
cable) company to a three-medium company, by prototyping, demonstrating,
designing, and then implementing Gotuit Media's J2ME-based mobile video
platform. The first commercial application of this was the
NFL Fantasy Football Video product for Sprint (see below), which was
followed up by a more generic customer-customizable mobile video player
for large customers of Gotuit Media.
- Designed and implemented Gotuit's NFL Fantasy Football Video product
("NFL Fantasy Video") for Sprint, which sold during the 2006 and 2007
football seasons. Team lead for three other developers and one QA
engineer, implementing our J2ME JSR-135 video-playing midlet for 12 EVDO
phones in 2006 (Samsung A900, A920, A940, A960, M500, Sanyo 7500, 8400,
9000, Motorola SLVR L7c, KRZR K1m, RAZR V3m, and LG Fusic) and an
additional 4 phones in 2007 (Samsung M610 Glint, M620 UpStage, M510,
and LG Muziq). The midlet (which sold for $6 per month) allowed users
to pick their fantasy team of any players in the NFL, and each week see
all of their players' plays from actual games via streaming video to
the phone. Users were also allowed to go all the way back to the beginning
of the season and see any plays by any individual player in the NFL.
In addition to implementing most of the product and directing other team
members, was responsible for scheduling, coordinating with production,
discussions with Sprint, marketing, and kitting.
- Designed and implemented a generic "white-label" mobile player app, to
allow mobile access to Gotuit's customers' video assets. This midlet was
easily adaptable to individual customer demands, and formed the mobile
portion of the Gotuit video platform. Successful demonstrations were done
using this white-label app for EMI, UMG, and Sports Illustrated.
Device support for the white-label app included all of the phones mentioned
above, plus the Sanyo M1, Sanyo 8500 (Katana DLX), Motorola ic902,
Motorola RAZR2 V9m, and AT&T's Samsung a707.
- Team lead for the project of implementing our indexed video-on-demand
client for the BlueStreak platform, including the user interface and
negotiation with the backend to stream the correct video from the correct
location. Implemented this on a BlueStreak implementation running on
top of the SARA (Scientific-Atlanta Resident Application) platform.
New client code incorporated all of the functionality of the original
"vanilla" VOD client, plus features from our Fantasy Football
implementation.
- Created GDMS (Gotuit Digital Media Server) daemon to serve up multi-streamed
MPEG video. This was used by our DNCS to create the barker channels that our
set-top box client code used for the upper right corner of the screen for
each channel.
SavaJe Technologies - Chelmsford, Massachusetts
Member of Technical Staff
April 2004 - September 2005
Skills used: Embedded device work (low memory considerations, power management,
thread management, maintaining small footprint, etc.),
Java (J2SE, J2ME), C++, C, JNI, NMI, Swing, Ant, BitKeeper, CVS,
JUnit, Midlets, Xlets
- Kernel developer for SavaJeOS, SavaJe's cellphone-based Java operating
system (implemented in Java, C++, and C). Platforms included multiple phone
prototypes, test boards, Compaq IPAQ Pocket PCs, and an emulator.
- Team lead for Browser group. Designed and implemented device layer
to integrate 3rd party C++-based browser (OpenWave v7.0) into SavaJeOS.
Work included implementing support for networking, key input,
a framebuffer-driven display, timing, font support, memory management,
file I/O, settings-store, threading, startup/shutdown, content plugins,
focus management, and input methods (T9).
- Created browser service to maintain the lifecycle of OpenWave's
environment, handle error-recovery, and to provide a common way for other
parts of SavaJeOS to interact with the browser. Communication between this
Java-based service and our C/C++-based devlayer implementation involved
heavy use of NMI, worker threads, and a careful examination of the call
path to ensure that SavaJeOS limitations were met (such as not calling
back into Java from a thread started from C).
- Replaced SavaJe's homegrown media engine with 3rd party media
engine from PacketVideo Corp. Implemented PacketVideo's OSCL (OS
Compatibility Layer) for SavaJeOS, providing support for the display,
timing, memory management, threading, logging, network, and JSR-135.
This work provided the ability to play back, stream, and author MPEG-4
(video: H.263, audio: AAC/AMR).
- Repeatedly performed extensive code optimization to ensure that SavaJeOS's
kernel still fit onto all target devices after each major increase to
its size from 3rd party additions. This included compiler optimizations,
switching to a Thumb-based compiler, the creation of glue layers to reuse
redundant libraries shared by SavaJeOS and third-party code, and the
repartitioning of persistent flash memory. This work allowed SavaJeOS to
continue to function on a 32 MB flash phone while competitors moved to
64 MB phones, saving the customer substantial per-phone hardware costs.
- Implemented memory management checks such that the browser and
the media player could, if not in use, be terminated by the OS when memory
use was at a critical level. Ensured that common memory-critical cases
performed as designed, such as browsing while playing music with the
media player running in the background.
- Wrote the Digital Rights Management user interface application
for SavaJeOS, which handles OMA rights-object delivery cases such as
separate delivery (superdistribution), combined delivery, forward-lock,
and the arrival of rights objects for Midlets.
- Enhanced SavaJe's flash-update utility to allow the installation
of the kernel, resources, platform compatibility layer, and applications
all at once.
- Implemented support for provisioning dialogs to be displayed when
data is provisioned to a user's phone. Tested these extensively via use of
NowSMS to send provisioning data wirelessly to the phone.
- Implemented several additional OpenWave support applications,
including the Root application to handle browser termination, a Proxy
application to handle suspend/focus issues, another Proxy application for
content handling (so SavaJe applications can handle arbitrary mimetypes
via the browser), and a special-case scheme service to pass rtsp:// URLs
to the media player for streaming.
- Wrote company build script to automate individual and nightly builds.
- SavaJe builds had previously pulled in libraries via scp, requiring
a network connection. Wrote cached_scp script that cached the results
of successful fetches, so that when a network wasn't present it could
use the cached copy of the library. Performed remote and local md5
checksums first to eliminate unnecessary network traffic and ensure
the correct version was always used, failing if the correct version
wasn't available. This permitted developers to do builds while traveling.
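In outline, cached_scp behaved like this Python sketch (the original was a
build-system script; the cache location and helper details here are
assumptions):

    import hashlib
    import os
    import subprocess

    CACHE_DIR = os.path.expanduser("~/.scp_cache")  # assumed cache location

    def _md5(path):
        with open(path, "rb") as fh:
            return hashlib.md5(fh.read()).hexdigest()

    def cached_scp(host, remote_path):
        """scp with a checksum-verified local cache, usable offline (sketch)."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        local = os.path.join(CACHE_DIR, remote_path.replace("/", "_"))
        try:
            # Ask the remote side for its md5 first (assumes md5sum is installed).
            out = subprocess.run(["ssh", host, "md5sum", remote_path],
                                 capture_output=True, text=True, check=True)
            remote_sum = out.stdout.split()[0]
        except (subprocess.CalledProcessError, OSError):
            # No network: fall back to the cached copy if we have one.
            if os.path.exists(local):
                return local
            raise RuntimeError(f"no network and no cached copy of {remote_path}")
        # Skip the transfer when the cache already holds the right version.
        if not (os.path.exists(local) and _md5(local) == remote_sum):
            subprocess.run(["scp", f"{host}:{remote_path}", local], check=True)
        return local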
Author of Hacking TiVo: The Expansion, Enhancement, and
Development Starter Kit
(Wiley Publishing, 550 pages, ISBN: 0-7645-4336-9)
December 2002 - September 2003
Skills used: Writing, reverse engineering, development, Tcl,
cross-compiling
- Number 5 bestseller (overall) on amazon.com for 3 days
- Number 1 in Computers/Internet category on amazon.com for over 6 weeks
AltaVista Business Solutions - Andover, Massachusetts
Principal Software Engineer
December 2000 - December 2002
Skills used: Java (J2EE), C++, C, Swing, JavaScript, XML/XSLT, JMX, JSP,
SOAP, ClearCase, Ant, JUnit, SQL, JDBC, HTML
- Owner of the Management User Interface portion of the AltaVista
Enterprise Search product.
- After a year of work with the existing codebase, designed and
implemented new architecture using servlets, XML, XSLT, and JMX.
This new architecture allowed for easy internationalization of the
UI, customer-modification of the look and feel of the UI via XSLT
stylesheets, and seamless management of all components (including
custom-written pluggable components) without needing any changes
to the code for the UI.
- Prior to this rearchitecture, maintained and added features to the
previous Java/Swing/Applet-based codebase.
- Heavily contributed to the creation of the MailScooter crawler, which
fetched mail to be indexed from mail servers. This contribution
consisted of writing the entire UI, as well as some of the core of
MailScooter.
- Implemented solutions for TicketMaster and Monster.com, both
resulting in sales (of $1.2 million and $1.4 million,
respectively). This lead position involved interfacing with the
customer, writing custom code for their needs, and stress-testing
the solution.
- Redesigned existing build system from ground up, using Jakarta Ant
instead of make. Orchestrated move of components from old
directory/package structure into this new system. During the
process, added such things as JavaDoc automation, XML validation
of any XML files in ClearCase, and new kitting commands.
CMGI (InfoMation, PlanetDirect / MyWay, CMGI Solutions) - Burlington
(& Andover), Massachusetts
Principal Software Engineer
August 1997 - December 2000
Skills used: C++, RogueWave, Perl, JavaScript, XML/XSLT, HTML, ClearCase,
WML, HDML
- Designed and implemented modular, hierarchical layout engine
allowing the caching, display, and layout of XML and other generic
objects. This included creation of several content sources to feed
this system, including several customer-specific formats. This
engine (OMS) later became much of the basis for CMGI Solutions'
main product, SolutionsPort.
- Enhanced this system (OMS, Object Module System) in its intended
direction - to be able to support any format with the same data.
Implemented stylesheets to support WML devices (Nokia and UP
browsers) as well as Palm VII, AvantGo, and VRML browsers. The same
data could then be accessed and edited via web browsers, cell
phones, etc.
- Designed customsearch module allowing custom searches of data
scraped from the web, converted into XML, and then into the
desired format. One example of this allowed a cellphone user to
search switchboard.com from their WML browser, and get the results
back in WML, all from regular expressions that converted the
original HTML into XML, and stylesheets that converted the XML
into WML.
- Did extensive work integrating external products into
OMS/SolutionsPort, including Mailspinner, Echo, and external feeds.
- Developed Email Notification system delivering personalized
internet/wirefeed news from InfoMation's flagship knowledge
management product, Echo.
- Created Filebot feed agent to retrieve files from network drives
and convert them for filtering and display through Echo. Worked
extensively with OEM partners in this endeavor.
- Did extensive work on WebRobot feed agent that gathers documents
from the web for filtering and display through Echo.
- Worked on Filterbot filtering agent that identifies which
articles/documents match users' queries.
- Implemented many feature requests for new functionality to all
aspects of Echo including the ability to change presentation
styles of fetched documents, the ability to specify more
accurately which information is desired by the user, and an
overhaul of the administration interface.
- Major participant in the rewriting of the entire UI, including
moving much of the implementation of the UI from server-side
C++-based CGIs into client-side JavaScript for faster performance.
NetScheme Solutions, Inc. - Marlborough, Massachusetts
Principal Member Technical Staff
January 1997 - May 1997
Skills used: Java, SQL, ODBC, JavaScript, HTML, Tcl
- Created Tcl-based scriptable test harness tool that issues form
queries via HTTP, then issues corresponding ODBC queries for the
same data, and compares the results to verify the accuracy of
DataSite's navigation server.
- Integrated third party Java charting package into DataSite.
- Wrote several tools to exercise various components of the
DataSite product line.
Carberry Technology / Electronic Book Technologies - Lowell,
Massachusetts
Senior Software Engineer
June 1992 - November 1996
Skills used: C++ (Unix, Visual C++, and Mac), MFC, Netscape plug-in API,
Perl, AppleScript, firmware design, X11/Motif, HTML
- Primary developer of FIGleaf Inline, a commercial Netscape Plug-In
with over 64,000 registered users (Windows 95/NT, Solaris, SunOS,
and IRIX). Demonstrated this at the first Netscape Developer's
conference (March 1996).
- Developed a web-based customer-registration/product-information
system to track evaluations, purchases, and general usage of
Carberry products.
- Developer of CADleaf Thumbnails (CLBrowse) product for Windows
95/NT.
- Wrote Calcomp PCI and Versatec plotter drivers for the CADleaf
product line.
- Wrote several additions to the Windows 95/NT, Motif/X versions of
CADleaf REDliner.
- Ported the complete set of Zoom View libraries to the Macintosh
platform. These make up the core graphics engine of Carberry's
product line as it relates to displaying, redlining, and editing
of images.
- Developed CL-Trans graphics translation package for Macintosh.
Center for Productivity Enhancement - Lowell, Massachusetts
Project Manager
October 1988 - June 1992
- Produced in-house Macintosh front end to Factory Simulation
project.
- Project Manager for MASE (Management and Security Expert) system
based on CLIPS.
- Performed systems and network maintenance on all Macintosh and
Unix systems.