So, yeah. And it's really exciting because in 2015 Patricia Garcia gave a talk here in Berlin at JSConf on offline-first data. I wasn't there for it, but I saw it a year or two later on YouTube. And I kind of figured out where her LinkedIn profile was. There was no job listing, but I found her GitHub profile and a link to a repo at Justin's organization, which is Field Intelligence, where I work now. So, I emailed Justin, and two weeks later I was on a plane to Nigeria, and now Patricia and I are colleagues and I'm speaking here about offline-first stuff.
So, what we do is try to deliver health care commodities in the places where it's a bit tougher to deliver them. Expanding access by partnering with everyone from federal governments down to small pharmacy traders. And doing that with software. Supply chain software.
So, we're headquartered in Abuja where we work. It's in the middle just below the red part in Nigeria.
And this map is population. And kind of one of the reasons why Nigeria is that it's huge, it's growing like crazy. It has a massive market, and it's about one sixth of the continent. Like 200 million people.
We're a small team of software developers and operations people. We have an office here, a small one in Berlin, and then a couple of operations offices in Lagos and Nairobi. So, three or four years ago when the company started, we kind of asked the question: okay, what tech stack are we gonna use for pharmacy supply chain management? You can buy one. You can use one that's already built.
There's a lot of them. You can customize on top of a development platform like Oracle or an SAP. A lot of warehouses and places in Nigeria do this. Or you do what a lot of large companies do, a lot of large tech companies and Fortune 500 companies where they just build their own.
If you're going to do that and it's 2015 or 2016, you're going to choose a boring but sensible stack. Maybe C# with .NET, or Django with Python, or Ruby on Rails. Something with Postgres or a relational backend. Which works really nicely with enterprise resource planning.
You get a lot of stuff out of the box, a lot of tools that can really help you. But for us, our requirements were a lot different than that.
And there's a contrast that I kind of like to draw. People talk about these cities in different ways, but I like to think of Abuja and Lagos as the places we need to operate. Lagos is a megacity; it's like a country, 20 million people. It has tons of growth, tons of money, tons of opportunities. And then it also has networks that sometimes work and sometimes don't work.
You're a small pharmacy. You connect to the Internet on a cell tower. There's tons of cell towers. You see them everywhere, but then there's a traffic jam right outside of your business and suddenly 10,000 people are trying to use the same cell tower and you don't have network. But you need to keep working.
And then Abuja is a planned administrative city of just like a million people. And I always have Internet there. People don't carry cash. They pay with bank cards.
Our partners at the federal government have this giant command center room that has network. It has HDTVs. It has a rotating web camera that follows you as you talk, and they need to see what's going on in the whole country. So, we want to make something that's better than just a traditional boring web application.
We need something that can work in both of these places. That means doing something that's offline capable.
And offline is not binary. It's not just: it's an offline-capable app, or it's an online app. I know it looks crazy on the screen, but you have a lot of different options.
The first one is what we call small offline. These are names we made up for the categories. It's a marketing gimmick that a lot of web-based apps use where they say they're offline capable. And people who aren't software developers don't know what this means. We know: you're working on an issue ticket in GitHub and you lose network connectivity, you can keep typing. The data will stay there.
Maybe it's thrown into local storage. But you don't want to refresh or click around. We couldn't use this model. It's not gonna work for us.
Medium offline is our word for just using a native app. An iPhone app, an Android app, or writing for Windows or macOS is a lot more attractive. And I've done work on apps like that before for pharmacy supply chain. Our company has as well.
It's not something we're necessarily going to rule out in the future. But it is still very manual.
There's question marks for how offline it is. Because it totally depends on you as a developer. What kind of database are you working with? What kind of rules are you setting up and how are you syncing with remote? You pay to have offline features in consumer apps like Spotify and like Duolingo with offline features. Duolingo works well, but Spotify, every time I get on a plane, it forgets what I said to download offline.
Big offline is important to think about. This is how hospital IT systems work. Most of them historically. This is changing.
But what you've had, historically, is like a very large on-site deployment of servers. Even a server warehouse. You have IT staff. You have a local area network. And it's because clinicians need their software to work offline. It's not like offline is new to tech companies in this industry.
But as I said, that's not gonna work for us. We can't put a server in every location. Right now we're working with about 30,000 pharmacies and there are hundreds of thousands to plan for. So, we went with the web. With offline-first on the web. It gets us kind of the things we want.
Low IT support. You can do the whole thing offline, but it's still the Internet. It's still syncing.
And there's a lot of talks on which distributed database to use. Like the basic setup for an offline app, web app, is you put a service worker in. You tell it to cache the static application. And then you work with local storage on the browser.
Which has become a lot more attractive over the last several years. You get to use a lot of the user's available disk space, a percentage of it. But how do you work with the in-browser database, and what do you replicate with? And CouchDB and PouchDB, which some people at this conference helped build, are kind of the tools that most people end up using.
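To make that basic setup concrete, here is a minimal, framework-free sketch of the cache-first idea a service worker gives you: serve from a local cache when the asset is already there, otherwise fetch it remotely and keep a copy. `makeCacheFirst` and `remoteFetch` are invented names for illustration; a real app would do this inside a service worker's fetch handler using the browser Cache API.

```javascript
// Cache-first fetch, sketched with a plain Map standing in for the
// browser Cache API. Kept synchronous for brevity.
function makeCacheFirst(remoteFetch) {
  const cache = new Map();

  return function fetchAsset(url) {
    if (cache.has(url)) {
      return cache.get(url); // offline-safe path: no network needed
    }
    const body = remoteFetch(url); // network path
    cache.set(url, body);          // cached for future offline use
    return body;
  };
}
```

Once the static application is cached this way, losing the network only affects data, not the app shell, which is what makes the offline-first approach viable at all.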
We're really happy with the area that they cover for us. There's this question that I often get: oh, why don't you write your own super cool replicating thing that uses blockchain and Kafka and is event-sourced? And the thing is, the problem is never in the actual replication protocol. We don't have engineers sitting around wasting time trying to figure out what sync steps went wrong. If there's a little bit of network, the documents sync. So, these two have been really useful for that.
Like somebody gave a talk a while ago where they said friends don't let friends build their own replication protocols. Gregor said this. Yeah, all of our problems are elsewhere. We have our stack.
We're sticking with it. So, what could go wrong with this somewhat non-traditional application? For those of you who have maybe done work in global public health, you know it's not actually this very clearly administered, highly educated system of people with clear incentives. It's constantly in a state of emergency.
Whatever the current situation is, or however the current administrations are working, there's just this constant need to build things quickly and make things happen. And very sort of unclear incentives. It's not like a market economy that's just working off of a bottom line dollar. So, we started hitting like a lot of problems, right away.
Even though all of us had experience in building these kinds of applications. We were building them at a very much bigger scale than what we had done before. And the first problem you hit right away is what do you sync?
You have to segment the data somehow. You can't I mean, we did. We started by just giving all of the data to every client. We say like, you get the whole database. But then over time you need to start segmenting on something.
Because if it's all going to get stored in the user's browser, it doesn't matter how much space they have; the data is growing linearly over time. You get hundreds of thousands of documents and reports and shipments. You can't just tell the user: take everything.
And the way you segment data using these databases, using CouchDB, is you set up a partition per user. You say: this user is gonna get these documents. And the way they get them from the main database is you just sync them. So, your rules for setting this up are entirely custom code.
It's entirely up to you. So, you have to take into consideration: all right, what are our rules? You know, somebody in Delta State should not be seeing the same data as somebody in Lagos State.
Okay, geography. Next: time. Time-based data becomes a really big issue, because you need to sync some of it, but not all of it. But then this comes into the domain model of what is okay to cut off. Because, to calculate how much stuff you have at a store, or how much money you have in your bank account, it's usually: what was the opening transaction? And then let's add all of the debits and credits over time until I get the current balance.
But if I only sync your last ten transactions, that's not gonna give you the right data to get the right end number. So, you look at what to sync as a developer and you have all of these different choices. The other thing that I'm not gonna spend a lot of time on is that with supply chain management you have really crazy access rules. You have tons of dynamic lists of who's allowed to see which sponsored commodities at which locations at which point in time. So, there aren't really clear ways to model your data.
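The balance calculation above is worth sketching, because it shows why a time cutoff forces a modelling decision: if you only sync recent transactions, you also need a checkpoint document that summarizes everything older than the cutoff. The field names here are invented for illustration, not our actual schema.

```javascript
// Stock on hand = checkpoint balance + every debit/credit since the
// cutoff. The checkpoint document stands in for all the older
// transactions that were deliberately not synced to the client.
function stockOnHand(checkpoint, recentTransactions) {
  return recentTransactions.reduce(
    (balance, tx) => balance + tx.quantity, // credits positive, debits negative
    checkpoint.balance                      // summarizes the unsynced history
  );
}
```

Without the checkpoint, summing only the last ten transactions gives a number that looks plausible but is simply wrong, which is exactly the failure mode described above.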
Next is network storage. And this is kind of exciting. Normally in an application you have one database that you're working with. And so your code to access the data is just in one place.
And you try and keep it abstracted. Maybe someday you're gonna replatform and you're gonna choose a different database. But typically, you're working with one type of data store.
If we had gone with a traditional web framework application, you'd have one data store. But with online/offline, you have data that can be coming from the IndexedDB browser database through Pouch. You have data that could just be an in-memory cache. You have requests from the browser to get remote data that you don't have locally. And then you have backend serverless functions that are also talking to the same database and need to do similar business logic.
So, as developers we don't really have a framework for where things go. We're busy, we need to throw our code somewhere. If this was Rails, I'd have an endpoint, have a new table, set up a new entity and put the code under it. And even if it's messy, everybody in the organization is used to that pattern. But for us, it was just like: put it wherever you want. Let's put it in the lambda.
Let's put it in the backend script. Let's put it, you know, in the frontend. And over time you don't have like an isolated place because you're working with different network storage adapters.
One of the bigger problems, and it isn't just us on the frontend doing web-based applications, is modelling JSON. Modelling without a relational database. And as far as we know so far, there isn't one clear way to do this that always wins. There are a lot of different strategies to deal with the problems.
So, normalization. Like you want this table and this table and this table to be neatly separated and you have foreign keys between them.
Like, this is a database. That's awesome. That's what you want. And so, you're a developer and you're like, I'm gonna do this in JSON and create a document and a document and a document. And then to get this document and to join on them, you're making HTTP requests over a network because you don't know what the relation is yet.
So, then you say, oh, okay, that's too much. I'm gonna put everything in the JSON document. And then a user says, hey, we need to change the name of this item. Okay, let me do a bulk update on half a gig of data and hope there aren't any conflicts. It's not always clear what the right strategy is for this.
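That bulk-update pain is easy to show. The sketch below assumes a denormalized shape where every shipment document embeds the item's name; renaming the item means rewriting every document that mentions it. The document shapes are made up for illustration.

```javascript
// Denormalization trade-off: the item name is copied into every
// document, so a rename has to touch all of them.
function renameItemEverywhere(docs, itemId, newName) {
  return docs.map(doc => ({
    ...doc,
    lines: doc.lines.map(line =>
      line.itemId === itemId ? { ...line, itemName: newName } : line
    ),
  }));
}
```

With CouchDB, every rewritten document is a new revision that has to replicate to every client and can conflict with concurrent edits, which is why a rename over half a gig of data is scary. Normalizing the name into its own document trades that for a join (an extra lookup) at read time.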
Sometimes it helps to denormalize, sometimes to normalize. And Patricia's talk had cool ideas for strategies that use multiple documents to have two users working on the same entity at the same time without causing conflicts. Yeah.
And then the other one is around what you sync to who. So, kind of access stuff. You might have a list of pharmacy commodities that you need to send to a supplier. But it turns out there's two suppliers: one for cold-chain commodities, one for non-cold-chain commodities. You need to split the document. It's a business rule that impacts how you're modelling the JSON. So when a user says, hey, I need a new type of thing, what do you do, Couch people? It really depends on what you're doing.
It makes having a single framework for everything difficult. So, for deciding how to segment data and what to sync to users, we created this thing we call an ID dispenser. It's just a remote endpoint: when a user starts, when they initialize the application, we have the browser make a call to a lambda. And the lambda takes the user, who it is, what their location is, what programs they have access to, and it goes and talks to CouchDB and says, hey, what documents does this user need? Couch says: okay, based on your business logic, it's X, Y and Z.
Lambda says thanks, sends it to the browser, and the user has the subset of the data they get to use.
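The dispenser's decision logic can be sketched as a pure function: given who the user is, return the document IDs they should sync. The rule shapes here (a geography rule, a program access rule, and a time cutoff) are invented for illustration; the real logic lives in a lambda that queries CouchDB.

```javascript
// "ID dispenser" sketch: decide which document IDs a user should sync.
function dispenseIds(user, allDocs, cutoffDate) {
  return allDocs
    .filter(doc => doc.state === user.state)            // geography rule
    .filter(doc => user.programs.includes(doc.program)) // access rule
    .filter(doc => doc.date >= cutoffDate)              // time-based cutoff
    .map(doc => doc._id);
}
```

The browser can then hand the resulting list to replication, for example as a `doc_ids` filter, so the client only ever pulls its own partition of the database.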
As an aside, this has so far been working well for us. But if you're going to use infinitely scalable serverless functions on behalf of clients, remember that other resources of yours are not infinitely scalable and you might DDoS your own database.
So, yeah. This is really exciting. What we've started to do in our APIs with this problem, where sometimes your network storage adapter is a local database, sometimes a remote database, sometimes you're in Node, is having APIs that have all of that defined in them per entity.
It's one API that we'll use in all of our different places in lambdas, in the backend and on the frontend, in scripts. And your API can know what kind of network storage adapter you're using.
So, you have situations where you ask, hey, give me all of the reports. And the API goes, okay. Cool. I'm looking in the database. I have found three months worth of reports.
And that's what I know about; that's what's offline. And the user says: but I want to page back further than that. It isn't fair that I should only see what's offline. And the API at that point says: I'm going to switch network storage adapters, and I'm going to go fetch the remote docs and give them to you.
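That adapter switch can be sketched as one entity API sitting in front of two stores: try the local (offline) adapter first, and fall back to the remote adapter only when the local one misses. The adapter shapes are invented; in a stack like ours they would wrap PouchDB locally and HTTP/CouchDB remotely, and would be asynchronous (kept synchronous here for brevity).

```javascript
// One entity API, two network storage adapters behind it.
function makeReportsApi(localAdapter, remoteAdapter) {
  return {
    getReport(id) {
      const local = localAdapter.get(id);   // cheap, works offline
      if (local) return { source: 'local', report: local };
      const remote = remoteAdapter.get(id); // only hit on a local miss
      return { source: 'remote', report: remote || null };
    },
  };
}
```

The point of the pattern is that the same API object runs in the browser, in lambdas, and in backend scripts, so the business logic doesn't have to know which adapter it's standing on.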
And then a lot of this ties back into like how do you design this kind of application for the user? You have to tell the user what's going on. If they look for a report on their facility, some facility that's remote. Maybe they don't submit reports that often and they want to know, has this report been submitted? And they're working offline. The ID dispenser told them, you should know about this range of documents. You should have these offline.
If you go and check and the document's not there, display to the user: this thing doesn't exist. I don't care about what's on the remote; I know that this document was just never submitted in the first place. Or, if it is there, display it.
Then if you're going too far back, to a segment of the data that we can't sync, something that's much older, we tell the user and we display in the UI: this has got to be an online resource. This is not something you have offline. You ask for the report, you make the remote call, and it's either: nope, we don't know about that report, or: yep, here is the report, display it.
If you're offline, you have to display to the user, hey, we don't know. We're offline. We don't know if this report exists or if it doesn't exist.
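Those three outcomes can be condensed into one decision function. The inputs mirror what was just described: whether the report falls inside the range the ID dispenser said should be offline, whether it was found locally, and whether we're currently online. All names are illustrative.

```javascript
// Three-way answer the UI can give about a report.
function reportStatus({ inDispensedRange, foundLocally, online, foundRemotely }) {
  if (inDispensedRange) {
    // We were told to have this whole range offline, so local absence
    // is the truth: the report was never submitted.
    return foundLocally ? 'exists' : 'never-submitted';
  }
  // Older than what we sync: this is an online-only resource.
  if (!online) return 'unknown-offline';
  return foundRemotely ? 'exists' : 'never-submitted';
}
```

The key design point is the first branch: inside the dispensed range the client is authoritative even while offline, which is what lets the UI say "this doesn't exist" instead of the weaker "we don't know."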
So, yeah. The last thing is some pictures. This is a remote clinic that has a pharmacy. And you can see there's a little VSAT dish, which is for satellite Internet. And this clinic had pretty good Internet at the time.
It was like 2014 and they had like 512 kilobits a second down. That was fine. They could have an application on a laptop and dispense at the point of care for the patients.
The patients come up, and you would track the movement of pharmaceuticals electronically. This is still for the supply chain, not necessarily for the clinicians. And the design for this place was: hey, we'll just use the network. We'll really make sure that the VSAT always works.
We'll talk to the IT team. We'll be really serious about it.
But that wasn't the case. This VSAT would very frequently get slightly misaligned and the clinic would lose Internet. And what's really kind of the point of using this somewhat difficult, non-traditional approach is that for that pharmacy, it didn't matter. The application worked fine.
It continued to work. They could create patients, they could create dispenses, they could look up their stock.
They could transfer stock internally. And then, oh, I can show you the pharmacy. The pharmacy's like that one in the back; they're all red and look the same.
But the thing was, when a shipment would come once a month from the central warehouse, what would you do? You would have to manually enter it, like you're using an offline-only system. And even worse: you have hundreds and hundreds of transactions daily about really important commodities that the central team needs to plan and think about. That's why they asked us programmers to build something.
So, if you're not gonna send that data back home, then it's like, why are you tracking it in the first place? Unless it's for something later, some analysis, which is what a lot of these systems end up being used for. So, what the team would do is they would walk to this river. And I like this photo because it looks really nice. But this river was really directly behind the pharmacy.
It's just less than a kilometer away. And there's a cell tower over there now, so when you're at the river and you have a cell phone and you're tethering to your laptop, you have fine connectivity.
And they didn't really do this every day; they didn't need to. But they could just go over there whenever a shipment would come. Learn from the remote about the shipment, update their inventory levels, and sync to the central server: these are the commodities we dispensed over the last several weeks.
And the central team said cool: they need more of this, they're good on that, and we know what to do next. Even though their VSAT is misaligned and the IT staff is busy.
And many times we will kind of look at each other and be like, what are we even doing? Why don't we just make a Rails app or a Django app? This is so hard. But ultimately it means an application that is more robust. It can get us more data and it can work in the places we want to work in.
So, even though a year or two ago it was looking pretty scary, there have been a lot of learnings for us about stuff we've done wrong that have made it possible to kind of safely start delivering this type of application at a pretty large scale.
So, I'm out of time. But thanks very much, guys, for your time. And happy to talk here today.
[ Applause ]