
Managing Research vs Engineering

Don't manage AI or research the way we'd manage shipping features.

🔸
Improving AI systems leans toward research and the exploration of ideas. Research-oriented and engineering projects have to be managed differently: the former benefits from freedom and iteration while aligning on higher-level objectives; product engineering benefits from structure, rigor, and clarity of goals.

A recent conversation with a colleague made me realize that many of my peers have not explored the nuances of managing research vs engineering projects. Having worked across research, data science, and engineering projects, I am writing this note to capture some of my learnings.

Let's consider three functions in the software industry: a) Research, b) Data Science, and c) Engineering. These three functions have quite different goals, and how management expects business outcomes from each varies by organization.

First, the goals:

  1. The goal of research and science is to produce net-new knowledge and understanding of a domain so that better actions can be taken to drive a desired outcome. In some cases, the goal revolves around longer-term effects such as becoming a talent magnet or gaining credibility in a field. Their impact is typically measured on a 4-8 year horizon.

  2. The goal of the data science function depends on which of two organizational philosophies is at play:

a) Growth-centric organizations use data science to produce net-new knowledge about an ecosystem (product, market), and hold the function accountable for co-developing a growth strategy. Its impact is measured on a 1-2 year horizon, long enough to execute the strategy.

b) Product-centric organizations use data scientists as ML engineers - that is, to operationalize an existing strategy involving prediction models.

  3. The goal of engineering is to apply known knowledge to solve existing problems. Engineers operationalize and productize a well-defined strategy. Their impact is measured in as little as 6 months, a horizon that grows with seniority.

With these goals in mind, we can reason about how each function differs in developing and refining goals, in the processes and metrics that ensure proper execution, in establishing discipline, and in exerting control to keep progress on track.

My core insight is that research and science projects are highly uncertain in outcome, yet parallelizable.

Reward and Accountability

How much to weigh execution versus business outcome is a matter of reward philosophy as well as seniority.

A team with repeatedly good execution yet mediocre business impact likely inherited bad decisions in the first place. A team shipping high-impact features without execution rigor is unlikely to repeat its success. While a discussion of these tradeoffs merits its own post, in my experience most successful managers incorporate both aspects into their reward systems.

Recommendation: Research's business impact should be evaluated (and rewarded) on a longer time horizon. Once a strategy has been decided, however, execution should be the main focus of near-term incentive design.

Processes

Making progress on research, science, and ML modeling requires highly iterative processes and a culture that encourages feedback loops, learning, and revisiting the path taken. Pure engineering, on the other hand, benefits from planning and structure, following agile, sprint-based models.

Let's take an example:

We were planning to introduce short video recommendations in a popular social media app (much like TikTok).

The engineering tasks were clear: develop the frontend UX, define its interactions and contracts with the backend, improve latency through prefetching, and reduce the memory footprint; on the backend side, build real-time indexing, the serving backend, real-time ingestion and extraction systems, and relevance mechanics. These were all known techniques to scale, and execution benefited from clear structure.

The data science team was tasked with much more ambiguous goals: find the problems engineers should solve. Where and to whom do we introduce this product, and what would be the downstream second-order impact on overall content? How do we make tradeoffs between longer sessions and retention? How do we measure user satisfaction or dissatisfaction with the content we surface, and which feedback signals are strong enough to amplify? The process for managing these tasks was highly iterative, with weekly reviews that often led to non-linear paths being pursued.

The ML research team was presented with longer-term goals. We knew this was an area the company would want long-term investment in, so they were tasked with coming up with better algorithms and breakthroughs in the state of the art in video understanding. The process required giving researchers a high degree of freedom. While we had multiple ideas being pursued in parallel, what we did solidify was the success criteria.

In the end, while engineers found immediate solutions, researchers were able to reframe the problem altogether and produce ideas that shaped how multiple products worked within the company.

Recommendation: While engineering benefits from planning and structure, research and science benefit from broad-stroke iterations - a concrete sequence of goals that build upon each other, with freedom to adjust within them (a deliberate lack of structure). The way to mitigate uncertainty in research goals is to parallelize promising threads.

Control

An important aspect of management is identifying how and when to exert control. But first we need a clear understanding of who owns which risk factors. Does management own the direction and the team the execution? What about ambiguous science and research, where direction is not clearly set?

Recommendation: Research is the least amenable to control, as researchers inherit the risk factors. It is important to align on business goals, success criteria, and time horizon; however, exerting control should be restricted to budget and personnel, not individual tasks.

Learning  

I strongly believe growth comes from introspecting and internalizing learnings.

Here, I don't see a need to differentiate between functions; however, I have observed a natural tendency in research teams to favor a learning process.

Recommendation: Build a culture of introspection, learning, and sharing, and of being open and vulnerable, while providing a safe environment to fail and get back up.