Stories about Software


My Experience at DevOps World, 2023: Empowering Enterprise Engineers

Editorial note: This post originally appeared on the DevOps World blog, arising from my attending that event as “press.”  I’m going to be at the next one, this Thursday, in Silicon Valley.  If you’re in the area and able to make it, let me know and we can have lunch.

I have a renewed sense of hope for the condition of the enterprise software developer.

For anyone familiar with me or my career, this is quite a statement. For the rest of you, to whom I’m some internet rando, I won’t bore you with more details about me than are absolutely necessary to understand my take. Suffice it to say that in 2017, I wrote a book whose central thesis was that software developers should initiate an exodus from the enterprise.

Fast forward 6 years to now, when I just spent the day as “press” at DevOps World, listening to a series of talks and interacting with participants and sponsors. Based on what I’ve seen here, my outlook for the future of enterprise developers is far less bleak than it was back then.

I’ll dive into the specifics of why, but the overarching theme is that it seems organizational change in the enterprise has evolved from a long history of something being done to software developers into something being done for them.

At this conference, I heard and saw a lot of innovation aimed at earning developer buy-in, relieving their toil, and sending them back to their various enterprises with pragmatic, actionable ways to improve their situations.

Where It Started: The Endless Agile Transformation

One more biographical detail, and then I’m done. I promise.

In the mid-2010s, starting about 10 years ago and running up until I wrote my book, I earned my living as a management consultant in software, almost exclusively in the enterprise. I specialized in static code analysis for strategic decisions, but often found myself in enterprises, helping them answer the basic question, “why is our agile transformation not working?”

Back then, DevOps was scarcely on the radar for enterprises. Instead, everyone was doing a so-called “agile transformation,” which generally meant having a different, Scrum-flavored series of meetings and changing very little else. These firms were usually in year 6 of a 2-year transformation, and were still working on the “define what agile means” OKR.

The Bad Old Days of Shifting Burdens to the Left

If that sounds cynical, fair enough. But ask anyone in those orgs at the time, and they’ll almost certainly laugh ruefully and call it accurate.

One of the day’s speakers, Thomas Haver, referred to these sorts of institutions as “technology museums,” and I can’t think of a better turn of phrase. When you’re gluing Perl scripts to mainframes as part of some internal line of business initiative, you’re not using the latest and greatest, and you’re probably not knocking out that agile transformation any time soon.

But I think the central problem that engineering groups in these organizations encountered back then was neither the comically old technology nor the glacial pace of organizational change, per se. Rather, developers found themselves buried under the crushing weight of endless todo lists, as their employers heaped an ever-growing mountain of burdens on them.

Organizational Change Done To Developers

I can remember some interesting characters that I met during my consulting travels:

  • The “schema specialist,” who did nothing but review developer-proposed changes to any database, anywhere, and give them notes on what to do instead.
  • The “environment administrator,” who did nothing but review (and usually reject) requests to move JARs from one non-prod environment to another, and make sure the proper digital paperwork had been submitted.
  • The “integration architect,” whose contributions I never managed to figure out, but could reject seemingly anything that created an interface between two dev teams.
  • The “demo guy,” who engineers would have to work with to create PowerPoint decks for their sprint demos (instead of the customary “working software”).

I could go on, but I’m trying to make a point, not supply fodder for xkcd. And my point is that everything these organizations did heaped work on the engineers, including the creation of roles that seem like they should theoretically specialize work away from those same engineers.

To earn a living as an enterprise developer in the age of the agile transformation was to be an endless collector of bureaucratic toil. Write your software, submit compliant schema requests, fill out the JAR movement form in triplicate, please the integration architect, and make sure the PowerPoint guy knows that you’ve been delivering enough story points.

By the end of my consulting years, I hit peak cynicism. I thought developers should leave and not tolerate this situation, and I thought enterprises should give up on creating software and delegate that activity to vendors and future acquisition targets.

I’m pleased to see that things look different than when I left consulting 6 years ago.

Moving to Organizational Change For Developers

DevOps World had a common, refreshing theme. Tracy Bannon captured it well during a discussion panel: “The idea of shift left is not ‘dump it on the development team’.”

In opening remarks for the event, CloudBees CEO Anuj Kapur pointed out that a poll of CloudBees customers found that developers only spend 30% of their time, well, developing. The rest is lost in some flavor of dark work or another.

Everyone at the event seemed to agree that this is a wholly unacceptable state of affairs, and that the path forward is one that involves a rethinking of what is asked of developers and what is done for them.

Removal of Toil

The first main event theme that I observed centered around eliminating toil. Rather than schema and integration specialists forcing checklists on the developers, conference participants explored the use of tooling and organizational tactics to eliminate as much toil as possible from the world of enterprise engineers.

In the conference keynote, “Go Big, Say Yes,” CloudBees announced product changes with a direct impact. They made CloudBees CI highly available and highly scalable, meaning that organizations have immediate relief from problems such as bottlenecked jobs, needless time waiting for workspace downloads, and infrastructure-related build failures. All of this rolls up to a very simple value proposition: developers using their time instead of killing it, waiting for infrastructure.

When the engineers present saw the new Pipeline Explorer feature demonstrated, they burst into spontaneous applause, even though there was no break in the presentation. And the reason is obvious: they could see an end to loading some gigantic text file into an IDE so that they could search awkwardly for the reason a build bombed out. Pipeline Explorer lets them get there immediately, without the pain.

This idea wasn’t limited, either, to product announcements or even the talks. I watched the folks at Gradle, one of the sponsors, demonstrate a way to identify and troubleshoot flaky unit tests and the ability to use machine learning to prioritize executed tests based on changes to the codebase.

And Redgate did a demo of an offering that allowed source controlling database schema changes and keeping them consistent, in sync, and drift-free across environments. And all of this without a “schema specialist” in sight to scold them – just engineers safely changing the schema as needed.

Developer Buy-In

The engineer enablement theme wasn’t limited to tooling, either. One of the main takeaways that a recovering management consultant like myself couldn’t help but notice was the theme of getting buy-in to broader goals from the folks tasked with delivering them.

For instance, Katie Norton explained the concept of a software bill of materials (SBOM) and the broader concept of a software supply chain. Rather than a checklist of context-free “best practices,” this talk, aimed at an engineering audience, used practical analogies and diagrams to illustrate the challenges around and the gravity of managing the risk presented by using open source components.

This seems a lot better to me than how this risk was typically managed 10 years ago in enterprises: “don’t use open source.”

There was a talk that explained the idea of attestation and why it matters, encouraging teams to move compliance from a rote series of tasks to an objective, auditable and, most importantly, automated set of data. The mission was to avoid what presenter John Willis referred to as “governance theater,” wherein engineers would rename incidents to “cases” because of the reduced scrutiny that invited, or simply upload the same screenshot of a test run for 2 years to demonstrate that they had achieved code coverage.

Instead of tsk-tsking, this deceit was recounted with understanding. Of course teams did this when they were trying to ship features on time – it was the only way to get their work done. The developers here don’t need to change; the organization does.

Practical, Actionable Advice

Perhaps the most powerful motif of the event was that it distilled advanced practices and industry insights into easily actionable takeaways for participants.

For instance, speaker Julia Furst talked about the impact of generative AI on DevOps practices and suggested a series of tools that attendees could go and investigate. Ali Ravji and Mihir Vora from Capital One distilled hard-won experience building scalable CI/CD pipelines into concrete suggestions: focus on reusability, parallelization, and failure planning.

And, how’s this for practical and actionable? “Contribute to open source and adopt a plugin.”

Mark Waite, Senior Engineering Manager at CloudBees, gave a really cool talk about how contributing to open source goes well beyond putting collateral good into the world and is actually good business. He relayed his experience building a CI solution for his XP team back in 2003, before Jenkins existed, and how that required a dedicated team member to maintain.

A much better solution, had it existed, would have been to contribute to Jenkins and use it. So he encouraged attendees to go contribute to open source to improve it, fix defects, and patch vulnerabilities that can help their organizations upstream, before things become expensive.

The Joy of Being Less Cynical

While I certainly enjoy a good, cathartic rant from time to time, it’s not exactly fun being cynical. Usually, it’s a sign that you need to take a vacation or start a different job.

So it was nice to circle back to the world of enterprise software development, after years away from it, and find it far more promising than I remember. It was nice to see an enterprise less focused on crushing engineers under the weight of endless compliance tasks and more focused on helping them ease that burden.

If schema specialists and build administrators must exist, at least they can do so in supporting roles and with powerful tools, instead of the promise of toil for their colleagues. DevOps World was a fun event, sure, but it was also a philosophical palate cleanser for me.

Incidentally, the next stop on the tour is in Silicon Valley October 18th and 19th, and I’m planning to attend that event as well. If you’re in the neighborhood and so inclined, grab yourself a pass and we can have lunch or a beer at the happy hour.



Developer Hegemony, Revisited (And A Free Copy, If You Like)

In the “time flies” category, it’s been over four years since I announced the release of Developer Hegemony.

So I suppose it’s old enough that I need to start giving it away for free, right?  Like the way really old books and classical music are somehow free?  I’m pretty sure that’s how it works, but, whatever, I don’t make the rules.

Anyway, I’ll come back to the “have the book for free” part and explain in more detail a little later.  In the meantime, I’ll ask you to indulge me in some musing and the announcement of a new community initiative that you’re welcome to join.

Developer Hegemony: The Idea in Brief

If you’re not familiar, or you need a refresher, Developer Hegemony was a book I started writing on Leanpub and eventually published to Amazon.  It was, dare I say, my magnum rantus. And I’m flattered and bemused to report that it has sold thousands of copies in the last four years, in spite of my haphazard-at-best marketing efforts post-launch.

I suspect this is because, like the expert beginner, the beggar CEO, or the broken interview, this content taps into a smoldering populist rage.  Developer Hegemony is a lengthy answer to the question, “Why are corporate software developers the least influential people in software development?”

Unpacking all of the themes of the book here would be impractical.  But the book includes a methodical takedown of traditional corporate institutions, and it encourages a programmer exodus from the ranks of large organizations.

We’d be better served going off on our own.  We could sell our services (or SaaS-es) as individual contractors or small bands of partners in firms that I described as “efficiencer” firms.

And after releasing the book, I had grand intentions of helping people do just that.


Read More


11 Realpolitik Career Tips for Junior Developers

If you’ve followed me for years (and you read the title), you’re probably thinking, “Erik, you hypocrite.”

But let’s not confuse everyone else with inside baseball just yet. There will be plenty of time to get into why I called them “junior developers” in spite of really disliking that term.

So give me the rope with which to hang myself, and stay tuned for my advice to those embarking on a programming career.

What This Post Is and Is Not

What I want to do here today is offer some tips. But if I just wrote a post called, “Tips for Junior Developers,” I’d be, by my non-scientific making up of a number, the 79,667th person to write a post with that title.

And those tips would include things like:

  • Be humble.
  • Keep a developer journal and write down your mistakes to learn from.
  • Read well-regarded books by prominent developers.
  • Learn communication skills.

I’m sorry, I need to stop. No offense to people who have written these things (including probably me at times). But I’m boring myself to tears just typing out the strawman.

So I won’t write that post. I promise.

Instead, this post will have what readers of this blog and my book have come to think of as my personal spin on it, which generally ranges somewhere between hyper-cynical and coldly pragmatic, depending on your point of view.

If you’ve never read it, you might want to check out my definition of the corporate hierarchy, to understand what I mean when I describe people in organizations as pragmatists, idealists, and opportunists. That may prove helpful for perspective, since I’d characterize so-called “junior” developers (let’s say people with < 2 years industry experience) as idealists by definition.

Those formative 2 years will determine whether you remain an idealist, graduate to journeyman idealist, give up and become a pragmatist, or… well, let’s not worry about opportunists here.  The intersection of budding corporate opportunists and people looking for junior dev tips is probably the empty set.

Career-Savvy Tips for Junior Developers

If you’re embarking on a programming career, first of all, good for you.

Seriously. You’ve selected a path that will pay you handsomely and is, in my opinion, anyway, a lot of fun.  I always thought of professional programming as “people pay me to solve puzzles.”

As a so-called junior developer (or an aspiring one), you’ve come from one of two very broad paths:

  • Recent grad with a CS or related degree, looking for that first corporate job.
  • You have professional experience, but are making a career transition, perhaps with the aid of a bootcamp.

So you’re either standing outside the club, leaning against the velvet rope and peering eagerly at the movers and shakers within, or else the bouncer has just waved you in, and you’re telling yourself, “play it cool, play it cool!”

You’ve waited and worked for this moment. The club analogy trivializes it, because you don’t spend months or years waiting to get into the club.  (I mean, I don’t think so anyway — my days of going to anything called a “club” are so far in my rearview that I’d have to pull over for them to catch up.)

You’re grateful to have that first job, and to be welcomed into the society of real software developers.  I get it, I find your enthusiasm infectious, and I’m happy for you.

But don’t let this excitement cost you your perspective.  Because it will, and it does for most.

To understand what I mean, let’s get into the tips.

Read More


Live Blogging Developer Week, 2020, Day 2

Why the Dev Tools Market is Designed to Confuse You, 4:00 Pacific

This is a talk by Chris Riley, of Splunk.  And it starts off with a great hook: “the dev tools market is really confusing.”

It’s so confusing, in fact, that here’s a screenshot of the tools available for you to choose from:

Technical founders are good at building tools to solve a problem, but not as good at forming a go-to-market strategy. The result is a never-ending proliferation of tools.

This is exacerbated by the fact that we, as techies, love to tinker. As Chris suggests in his slide, for a lot of dev teams, tools are “junk food.”  We should perhaps get back to dev tools as veggies.

Which Tool is Best?

How do you know if a tool is the best tool? Chris offers some guidance based on his real world experience as an industry analyst.

You can really know based on:

  • It has continuity with other systems.
  • It has an expected result/outcome.
  • It has a hard and/or soft ROI.
  • You can know its total cost of ownership (TCO).

But sometimes the best tool is “the most okay tool.”
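To make the ROI and TCO criteria concrete, here’s the kind of back-of-envelope math involved. Every number below is my own hypothetical illustration, not a figure from Chris’s talk:

```python
# Back-of-envelope TCO vs. hard ROI for a hypothetical tool purchase.
# Every number here is invented for illustration.

annual_license_per_seat = 50 * 12   # $50/seat/month
seats = 20
admin_hours_per_month = 10          # ongoing administration overhead
hourly_cost = 100                   # loaded cost of an engineer-hour
training_one_time = 5_000

tco_year_one = (annual_license_per_seat * seats
                + admin_hours_per_month * 12 * hourly_cost
                + training_one_time)

# "Hard" ROI: time the tool saves developers, priced at the same rate
hours_saved_per_dev_per_month = 4
annual_savings = hours_saved_per_dev_per_month * 12 * seats * hourly_cost

print(f"Year-one TCO: ${tco_year_one:,}")          # $29,000
print(f"Annual hard savings: ${annual_savings:,}") # $96,000
```

The point isn’t the specific numbers; it’s that you can actually run this math before buying, instead of letting politics or fairy dust decide.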

On the other hand, you can easily find yourself misled by:

  • Open source.  Companies may not realize that a commercial entity has open-sourced part of its offering but still has a commercial interest, which is subject to change. You also need to be aware of project activity, security, and adherence to standard OSS practices.
  • Politics.  If politics factors into the decision to acquire a tool, that’s an issue. For instance, if a new member of leadership is bringing in a tool from their past life or because they like someone there, you’re in a bad situation. Tools don’t solve people problems.
  • Fairy dust. Vendors know their tools very well, but not your environment. This creates an expectation that a tool will solve all of your problems easily or bring you into the world of the latest and greatest. Demos are framing — not exactly what will happen in your environment.

Stages of Tool Adoption

What does tool adoption look like? Well, you’ve got three conceptual stages:

  1. Vetting. Vetting should include multi-role review, testing for compatibility with other tools, admin testing, and support/success testing.
  2. Procurement. You need to evaluate price and value and not delegate technical decisions to people in a procurement department. And think about renewal and future engagements.
  3. Implementation. This is another multi-role concern, and don’t let anyone hoard the tool. Go the opposite route and encourage demos and learning around the tool.

Decision-Making Resources

If you’re looking for help choosing tools or making recommendations, here are some resources you can make use of.

  • Your Peers.
  • Stack Overflow.
  • Vendor Blogs/Documentation/Tutorials.
  • Adopt an Advocate — follow developer advocates that you like, know, and trust.
  • DevOps.com’s market guide.
  • SlashData.co: state of developer survey.

My Takeaway

As someone who has spent time in a lot of roles in the IT organization, I can relate to this from every angle.  I have been (and, who am I kidding, probably remain) a developer demanding to have my favorite tool everywhere I go. I’ve also managed people clamoring for their favorite tools and found it counter-productive and tiring.

In tech, tools are important, of course. The right tools can be a game-changer and a competitive advantage. The wrong tools can sap the productivity right out of your organization. But what we often overlook are the superfluous tools and their cost.

In a world where we clearly can’t afford to wander around reinventing wheels, any engineering group needs to judiciously select tools. But we all have a mindset that drives us toward selecting them emotionally rather than judiciously. So companies need to develop a first class, objective framework for evaluating tools.

Search Fast Using First Principles, 3:00 Pacific

At the complete opposite end of the building, I arrived a little late to a talk by Steven Czerwinski, CTO for our friends at Scalyr.

This talk is a dive into how Scalyr handles search so quickly.  Scalyr is a tool created by engineers, for engineers, that aims at making search of log files super fast and powerful.

When you’re thinking about fast search, you’d probably think naturally of indexing.  Index is the default solution for search, in fact.

But as they were building Scalyr, they discovered that this wasn’t going to be the best way to go. It suffered from some problems:

  • Storing indexes is expensive in terms of disk space.
  • The code complexity of keeping indexes in sync, plus the processing overhead of index maintenance.
  • Indexes aren’t good at wildcards… or regexes… or ranges… or summarizing billions of matches.

So, counterintuitively, there were challenges and compelling reasons not to index.  And so they opted not to go this route.

Brute Force FTW

Speaking of counterintuitive, how about brute force?  This is what they went with, and it has its place.

Consider grep, for instance.  They used grep and its approach as the basis for cost-effective, high volume search, and they arrived at this approach by doing some math.

They set a goal of searching 1 day’s logs in 1 second for their customers. By starting with this goal, they were able to work backward from what mattered to customers and arrive at an approach.
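To give a flavor of what “doing some math” from that goal might look like, here’s a quick sketch. The log volume and per-core scan rate are my own illustrative assumptions, not Scalyr’s actual numbers:

```python
# Back-of-envelope math for "search 1 day of logs in 1 second" via brute force.
# All figures here are illustrative assumptions, not Scalyr's real numbers.

daily_log_volume_gb = 500     # assume a customer ingests 500 GB of logs per day
target_seconds = 1.0          # the stated goal: one day's logs in one second

# Aggregate scan throughput required to hit the goal
required_gb_per_sec = daily_log_volume_gb / target_seconds

# Assume one core can grep-scan roughly 1 GB/s of in-memory text
scan_gb_per_sec_per_core = 1.0
cores_needed = required_gb_per_sec / scan_gb_per_sec_per_core

print(f"Need ~{required_gb_per_sec:.0f} GB/s of scan throughput, "
      f"or ~{cores_needed:.0f} cores scanning in parallel")
```

Working backward like this tells you how much parallel hardware the brute-force approach demands, which is exactly the kind of customer-goal-first reasoning the talk described.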

Adapting with Users and Business Goals in Mind

This sort of reasoning helped them learn and make design decisions over the years. For instance, they’ve opted to use a NoSQL, columnar database for storage. This approach is a lot more conducive to the searches that their customers typically do.

As they’ve evolved and scaled, they’ve kept a focus on what it is that their customers need.  This system performs very well on search queries that their customers will run, which is important.

It doesn’t do as well when the same queries run over and over.  So they have adapted and built a secondary system, instead, to answer known questions asked at a frequent interval. The gist of this approach is to index the queries, rather than the data.
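To give a feel for what “index the queries, rather than the data” means, here’s a toy sketch of the idea. This is my own illustration of the general pattern, not Scalyr’s implementation:

```python
# Toy illustration of "index the queries, rather than the data":
# known, frequently-asked queries are registered up front, and each
# incoming log line updates their answers at ingest time, so asking
# the question later costs almost nothing.

registered_queries = {
    "error_count": lambda line: "ERROR" in line,
    "timeout_count": lambda line: "timeout" in line.lower(),
}
results = {name: 0 for name in registered_queries}

def ingest(line: str) -> None:
    """Evaluate every registered query against each line as it arrives."""
    for name, predicate in registered_queries.items():
        if predicate(line):
            results[name] += 1

for line in ["ERROR: db timeout", "INFO: all good", "ERROR: disk full"]:
    ingest(line)

print(results)  # answering a registered query is now just a dict lookup
```

The trade-off is the mirror image of a data index: ad hoc questions stay brute-force, but the known, repeated questions become free at query time.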

My Takeaway

One of the things I find so cool about the approach and the business philosophy here is that it reflects a willingness to think without constraints. It’s easy to start any given project or venture with a lot of baked in assumptions — even when discarding them would open up a lot of creative possibilities.

“I’m going to solve this with brute force” isn’t just something you probably wouldn’t hear, but it might even invite scorn.  And yet, it turns out to be a really creative way to come up with an approach that serves as a good business model and provides clients with what they really want.

So while we should all build on a basis of prior knowledge and bear that in mind, we shouldn’t be mesmerized by it to the point where we fail to explore all possibilities.  I personally am something of a contrarian and iconoclast, so I disproportionately enjoy seeing this in action in the real world.

Commercializing a Major Open Source Project, 2:30 Pacific

This is a talk by Andrew Roberts, CEO of Tiny, a popular open-source library for WYSIWYG/rich-content editing.  WordPress, for instance, has used it.

TinyMCE was a purely open source project from 2004 to 2015. At that point, Andrew got involved with the project creators and looked to commercialize the product.

Tiny itself was founded in 1999 and called Ephox.  The company was reborn in 2015 as Tiny and raised some venture capital.  Today, they have an engineering team in Brisbane, the original founders in Sweden, and some folks in other places.

Now for some interesting open source lessons.

Lesson 1: Free is not a Business Model

Matt Asay said that “entrepreneurs shouldn’t try to monetize open source, ever.”  Tiny’s experience has offered a counter-example, but the point remains that it isn’t easy.

If you want to think about value propositions and how to make money, you can make use of a business model canvas.  This gives you a way to think about how you’ll make money, from whom, and more besides.  But, more importantly, it will let you brainstorm how to bring revenue into the system.

Lesson 2: Freemium is a Business Model

So how can you get revenue into the system?

Some people spend any amount of time to save money. Other people spend money to save time. It is that philosophical difference that makes the business model.

-Marten Mickos

Both types of people exist in businesses. You probably want those that will spend money to save time. By adding proprietary value to open source, you can target these folks.

To put it succinctly, proprietary value == the easy button.

But you really need to get the word out, because free-to-paid conversion rates for open source are extremely low.  Look for a big community.

Lesson 3: Ask for the Money

Traction is the rate at which you turn non-paying users into paying customers.  This is difficult, so freemium companies employ a lot of tricks to gain traction.

  • Powered by messages
  • Email address collection
  • In-App messaging
  • Conversations

These things are tough to do and a lot of people shy away from them, but they’re necessary in order to gain traction.  Don’t hesitate to ask users to upgrade.

Lesson 4: Have Clarity around the Business You Want

Think of the business of your dreams, and ask yourself about things like revenue goals, subscriptions, customer dependency, etc. You need to think about what you can do over the course of time to make things sustainable for you and to be effective.

All of the different business disciplines will probably require a team, so think through the team size.

You’ll also want to think about the number of buyers that you’ll need and how much you’ll charge each of them. Will you have a few million dollar whale clients? Millions of few-dollar clients?

My Takeaway

I wasn’t sure what to expect, exactly, going in. I was just generally curious about the model of taking a free and/or open source product and finding a way to commercialize it. I have no intent to do this, myself, but I find it interesting.

And my takeaway from listening to Andrew is that it’s rewarding, but you really have to (1) have a good, solid plan and (2) understand what you’re getting yourself into. I get the sense that he probably talks to a lot of folks with naive ideas about turning a project they love into their living. And I could certainly see that being a path to tragedy and disillusionment.

But Andrew’s path to success seems to suggest that one absolutely exists if you do formulate a good plan and you do undergo something of a commercial mindset shift.

Finding Product Market Fit, 2:00 Pacific

(As an apropos-of-nothing aside, I find myself wondering at the convention I established to fully spell out the time zone after each H2. I can only conclude that I did something dumb and then just kept doubling down for no good reason.)

This talk is by Sara Mauskopf, co-founder of Winnie.  The entrepreneur in me couldn’t resist this, even though my main mission here was dev-centric talks.

Product-market fit is among the most important things for a founder to find. But how do you get there, and how do you know when you’re there?

She set out to build Winnie 4 years ago and took a winding path there.  So she’s going to recount her experience.

Product/market fit means being in a good market with a product that can satisfy that market.

-Marc Andreessen

In the Beginning: Tactic #1 Just Launch It

When they started with Winnie, the market they started with was parents. 90% of new parents were millennials and looking for info online.  Millennials are not just parents, but they’re also “spendy.”  Seems like a good market.

So, everything is easy, right? Just build a product for this market!

They started with what a lot of founders do: built an app for themselves. They built “Yelp for parents.”  It was information just for parents, and they thought this was a good idea.

They built it, launched it, and evaluated product-market fit.  And that’s tactic #1: just launch it.

A lot of people will dither and launch things in private beta and such. But they built it quickly, launched, and promoted it, getting featured in the app store. This got them a lot of users, which gave them the data to evaluate product market fit.

Up and to the Right, But Mistake #1

They did grow, but it was really hard. It required marketing stunts, growth hacks, and other non-scalable things. The growth wasn’t organic.

The mistake they made was to force it. So there wasn’t product market fit.

This potentially calls for a pivot, sometimes to a totally different market or product.  In their case, they liked the market, but they didn’t throw out the product and go back to square one.  Instead, they employed tactic #2.

Follow Net Promoter Score: Tactic #2

Because their product was “Yelp for Parents,” it was a lot of things to a lot of people.  But was there any one segment getting a lot of value? So much value that they were willing to promote the product?

To evaluate this, they segmented the user base into use cases.

What they found was that people who were using their product to find daycare and preschool were getting a lot of value. This was the killer use case, and they learned it from the NPS data.
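For the unfamiliar, the NPS math itself is simple: the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6), computed here per use-case segment. The scores below are made up for illustration, not Winnie’s actual data:

```python
# NPS = % promoters (scores of 9-10) minus % detractors (scores of 0-6),
# computed per use-case segment. All survey scores below are invented.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

segments = {
    "daycare/preschool search": [10, 9, 9, 10, 8, 9],
    "general parenting info": [7, 5, 8, 6, 9, 4],
}

for name, scores in segments.items():
    # The segment with the standout NPS is your candidate killer use case.
    print(f"{name}: NPS {nps(scores)}")
```

Segmenting the same survey data this way is what let them see one use case dramatically outperforming the rest.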

Scaling-Schmaling: Tactic #3

From here, they collected data manually, in just a single place: San Francisco. They spent money to do it.

Sara advises that, when you’re identifying product-market fit, don’t worry yet about scaling. Just collect the info, even if it requires disproportionate spending.

This went well. When they launched this in a mothers’ group in San Francisco, people were really excited. So excited, in fact, that they shared it unprompted, generating the organic promotion they were hoping for.

So did they have product market fit?

People Need to Be Paying for It: Mistake #2

The popularity might make it seem that way, but the money said otherwise.  A lot of people love free things, but you can’t run a business purely on free.

So they started to think through who might pay for this service, and why. Would the daycares pay for it? The parents?

They advertised the products for sale and measured clicks in order to evaluate. Both groups clicked, but the parents clicked a lot more.

So they decided to build a premium membership offering for parents, including payments infrastructure.

People Might Fib about What They’d Pay For: Mistake #3/Money Talks: Tactic #4

Unfortunately, it turned out that parents dropped out during the sign-up process, before handing over their money. They had said they would pay by clicking, but they didn’t back it up.

So rather than polling them about what they would pay for, employ the Tim Ferriss tactic of actually taking money. Get them to pre-order or sign up to evaluate whether they’d pay.

Sara then did something manual and easy. She set up Square invoicing and tried this out with the daycares. She sold them a path to the top of the search results, which they fulfilled manually.

And providers were absolutely willing to pay.

They’d pay for a month, and then they were more than willing to renew and keep paying to stay at the top of the results. Kids constantly age out of the programs, so the providers constantly need to be discovered.

So Sara added “AND make money” to the definition of product-market fit. You need a good market with a product that can satisfy that market, but you need people willing to pay.

My Takeaway

Toward the end of her talk, Sara said something that resonated with me, personally, as a business owner. She's about to have a baby, and she said that product-market fit, to her, had a trailing indicator of “can I step away from this business for a while, confident that it will continue to be fine and even to grow?”

Last fall, my wife and I took our first vacation in a long time: over two weeks. And I had a very similar feeling. It was a testament to the viability of the business and our structure that stepping away for a while didn’t result in disaster or collapse.

If your presence and Herculean effort is a defining requirement of the business, then you have more of a job than a business.

Time Machines, Horror Stories, and the Future of Dev Tools, 1:30 Pacific

Once again, I came in a little late due to this being super far from where I was for the last session.  This talk was by Shea Newton, of ActiveState.

The Hook

But I walked right into the recounting of a horror story. Two people, Perceptive and Dense, are in a car in the woods. Perceptive hears some scratching that Dense dismisses, but they both eventually notice it.

Eventually, they decide to leave. They hear on the radio that there’s an escaped convict in the woods, armed with a hook. When they get home, they find a hook in the car.

But where was this hook? Where did the convict get it? Why leave it?

The Ribbon

Then he talked about another such story: a couple, one of whom always wears a colored ribbon. Of course this comes up from time to time, but the woman who wears it always says “it’s just my thing.” Eventually on her death bed, her partner asks her to remove the ribbon and she agrees, at which point her head tumbles off.

Alright, but what kind of creature is this woman? How did this never come up? Anything that could be done about it?

Are there morals to these stories? If there are, they’re not really clear.

Tales of Developer Horror

Shea does relate them, however, to developer horror stories.  Here are some:

  1. The text parser that nuked his system.  As he was working on creating a text parser, a library he’d neither heard of nor cared about eventually crashed his system and it just wouldn’t boot back up. He eventually got it sorted, but that ended his enthusiasm for the project.
  2. Python Game Development. He didn’t have experience with this and was running Linux while the product documentation was for Windows, but figured, “how hard can it be?” Pretty hard, it turned out. Even armed with Quora and Stack Overflow, nothing worked, even at the very bottom where the downvoted answers were. After a hard slog to get going, he asked himself what he could learn from this experience, and, like the old tales, there were no clear lessons. Give up sooner?

In none of these stories was the protagonist the problem. The root of the story is that building things is hard. But what can you do with that information?

Perhaps the answer lies in making the process and the build easier. The future of developer tools should really involve getting rid of these types of no-win conundrums and making things easier in the development world.

My Takeaway

I definitely relate to this. I can’t tell you how many late nights I’ve spent, defeated and tired by some set of snarled dependencies and concluding “there’s no real lesson to learn here other than that the universe can be cruel.”

I like the general concept that dev tools companies should look to minimize this dynamic in the world of software developers. I personally feel that there is a tendency in our community, to this day, to assume that RTFM is our responsibility and that the pain of assembly is just inevitable.

Developer experience is something that companies consider all too infrequently.

So I love the idea of someone actively striving to consider and solve this issue related to my experience. Just because I can spend all day figuring out a dependency snarl doesn’t mean that I want to or that I should.

Dunbar, Conway, Mythical Man Month, 1:00 Pacific

Steve Deasy, head of engineering at Atlassian, gave a talk about unleashing the potential of teams.  He mentioned the idea of design patterns, and talked about applying similar concepts to organizations.

For instance, consider the pattern of a team growing. You wind up with drag factors: more planning/less doing, more meetings/less code, slower teams/reduced velocity. How do organizations fix this?

At Atlassian, the idea is what Steve calls “radical autonomy.”  You want teams not to need central decision-making as much. Management can enable this by looking out ahead and seeing what sorts of external dependencies will become issues for the team. Then, you remove those as issues by preparation.

Like J.R.’s talk earlier, Steve talked about the Dunbar number and its importance. Atlassian’s growth has taken them past that number, making scale patterns more important. They’ve solved this by creating logical boundaries, within which smaller teams can have autonomy.

This has allowed them to scale through the limits of the Dunbar number while remaining efficient.

Pattern: Are You Shipping Your Org Chart?

As an introductory example, he asked if you’ve ever been to a website where clicking the “buy” button took you to a page that felt like a completely different experience. And he also cited the question you hear internally: “why are different teams building the same thing within the company?”

This is, again, Conway’s Law, and it’s a familiar antipattern.  But Steve talked about how you can actually turn it into a positive, a proper pattern.

He cited his work at Groupon as an example. As the company grew, they were hitting constraints and inefficiency, and they were working to decompose a monolith.  As this happened, the architects gave some thought to how to address it.

The team built an architecture that allowed many independent teams to work on front end components.  This worked well because it involved reasoning up-front about where partitions would make sense in the software, and then structuring the org chart accordingly.

Pattern: Project is Running Late So Add Another Person

And now for the Mythical Man Month part of the title. Just about anyone with experience in the industry can relate to this.

Falling behind? Just add more people! It’s the wisdom that gives us the idea that nine women can have a baby in a month if waiting nine months for one to do it is too slow.

Steve talked about the tendency, upon falling behind, to “fire up the spreadsheet” and start talking about resources.

But, as most of us have learned through experience, this doesn’t actually work in practice. As Fred Brooks pointed out, “adding manpower to a late software project makes it later.”

But there’s some nuance here.  Bear in mind, this applies when we’re talking about strangers — about people who are not familiar with the project.

You can get some mileage by bringing in people with relevant expertise or experience. Don’t just throw bodies at a problem — put the right people in the right seats.

My Takeaway

I found the theme of design patterns applied to the organization to be an interesting one.  And I also enjoyed the wisdom of not taking established industry witticisms as complete canon.

We glean interesting insight from ideas like Conway’s Law or the Mythical Man Month, but those aren’t inviolable laws. There are ways that you can get something out of adding people to a late project or shipping your org chart.

So recognizing patterns and their implications as you grow becomes critically important. As someone who owns a rapidly-growing business and is very results/experimentation-driven, this message particularly resonates with me.

Learning Stuff at the Booths, 12:55 Pacific

I’ve been wandering around for the last hour, chatting booth attendees’ ears off, and trying to strike a balance between endless curiosity and not wasting their time, particularly if I don’t have a likely purchase case in the near term future.

Still, it’s tough. I get curious and my mind starts to explode with possibilities.  Here’s the TIL rundown of some of the companies I talked to and had, “hey, that’s a thing!” moments.

  • Corvid, by Wix.  If you’ve always thought of Wix as “that DIY web platform for non-techies,” then, hey, me too. But, apparently, they’re remedying that by building out first-class front-end development capabilities.
  • Crafter Software is a company that provides an open source, headless content management system (CMS). Their commercial offering layers support and hosting atop the OSS product.
  • TextControl is a document creation/management platform with roots in the .NET world. Seems like something I might be able to plug into our Hit Subscribe line of business software.
  • I learned about the SWAMP: software assurance marketplace.  As a static analysis aficionado, I love this. It’s a multi-language, multi-tool static analysis marketplace. Upload your code, select the analyzers you want to apply, and let ‘er rip. I do a lot of organizational consulting on the back of static analysis and a tool like this is SUPER handy for me.

Booth Touring, 11:55 Pacific

I’ve been strolling around a little since the last talk ended, checking out the booths. That’s after I retired to the lounge to refill on coffee, anyway.

I think the booth situation is probably familiar to anyone who has attended conferences in the past. I have nothing specific to report of interest on that front.

But I do personally enjoy strolling around and seeing what companies are up to. Historically, I might have enjoyed the swag angle, but my wife and I have been on a minimalist trek over the last few years, so there’s not a lot of appeal to collecting stuff. Nevertheless, I feel like booth strolling is a good opportunity to chip away at the problem of “you don’t know what you don’t know.”

I’m hoping to find a way to relieve pains I didn’t know I had.

Introduction to the Misty Robotics SDK, 10:30 Pacific

I couldn’t miss this talk. 20+ years ago, the main thing that drove me toward a CS degree was the promise of programming robots. And here’s a real, live robot.

This is a demonstration of the Misty Robotics SDK, by Justin Woo.

I wound up coming in a little late, because of difficulty finding the right stage, but I was immediately intrigued. He spun the robot around and showed an Arduino in her backside, and talked about her fingerprint-sensing capability, which makes her well suited for security applications.

He also cited some other good use cases for a small, friendly-demeanor robot:

  • Education, innovation, experimentation
  • Elder care
  • Concierge services at hotels

Now for some live coding!

Demonstrating the API

Justin fired up a browser-based command center, and showed capabilities like changing her LED (in her chest) to different colors and having her play sounds. He also did a detailed walk through of the different sensors that she has that you can use.

From there, he introduced the idea that the robot exposes her capabilities as REST endpoints. He showed a demo of sending a POST request to change Misty’s color.
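To make that concrete, here’s a rough sketch of what such a request could look like. The endpoint path, payload shape, and IP address below are my assumptions for illustration, not Misty’s documented API; check the Misty SDK docs before relying on them.

```python
import json

# Hypothetical sketch of driving a robot's REST API, based on the demo.
# The /api/led path, the payload fields, and the IP are all placeholders.
ROBOT_IP = "192.168.1.50"  # stand-in address for the robot on your LAN

def led_request(red, green, blue):
    """Build the URL and JSON body for a change-LED POST request."""
    url = f"http://{ROBOT_IP}/api/led"
    body = json.dumps({"red": red, "green": green, "blue": blue})
    return url, body

url, body = led_request(0, 255, 0)  # green
# To actually send it, you'd POST `body` to `url`, e.g. with
# urllib.request.Request(url, body.encode(), method="POST")
```

The appeal of the REST approach is exactly this: anything that can issue an HTTP request, from curl to a Twilio webhook, can drive the robot.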

(As an aside, Misty is very easy to anthropomorphize and, dare I say, cute. She blinks expectantly at you and makes friendly sounds, and it kind of creates the feeling that she’s a smart, friendly pet.)

Justin then demonstrated that they’d integrated with Twilio to rig up something where you can send SMS messages that trigger certain REST endpoints. So he was sending text messages like “get sleepy” and she responded by doing it.

The last thing that he showed involved VS Code. The idea is to allow you to deploy code to Misty even when she’s not connected to the internet.

So he showed some JavaScript code in the IDE that he could deploy directly from there to the robot. The code he demonstrated had her recognize a face and respond happily. So he deployed, spun her around, and she recognized him and seemed pleased.

I think that immediately makes Misty superior to my cats.

My Takeaway

I love this so much on a personal level, just because of my lifelong interest in the subject matter. I have no practical application that I can think of for Misty, but it’d sure be fun to have one and program her to truck around my house, smiling and applauding when I do mundane house chores.

Immediately, I looked up her price point, and it’s a non-trivial $3K, so I probably won’t get out my credit card in the immediate future. But I’d sure like to keep my eye on this for when I’m either a lot richer or the entry level price comes down.

But, if you work for, say, an app dev agency and you bring customers in to try to impress them, I think you could do worse than buying a few for your bench devs to play with. Seems like it’d be a great, attention-grabbing way to showcase the team’s ingenuity.

Save Weeks of Coding, 10:00 Pacific

This is a keynote, and I find myself in the large room with loud music and such.  This is a talk by J.R. Jasperson of Twilio.  His job, as chief architect, is to help Twilio scale.

Dunbar’s number: the limit to the number of people with whom one can maintain stable social relationships.

For humans, this number is, apparently, 148 people. And it matters in context here because it’s a knee-point that creates a problem for agile software development.

Agile is, after all, about “individuals and interactions.”  So this starts to break down.

Conway’s Law: organizations which design systems… are constrained to produce designs which are copies of the communication structure of those organizations.  Or, as J.R. puts it, “you ship your org chart.”

Jasperson’s Law: You can make anything seem authoritative if you label it as a ‘law’ with someone’s name as a prefix.

(As an aside, that’s the best maxim I’ve read in a while)

Anyway, his point with an eponymous law is to suggest that efficiency doesn’t have to erode with growth. Scale doesn’t have to ruin your app dev process.

As an extreme org-scale skeptic, I’m intrigued as to where this will go and hoping to see a satisfying devil’s advocate case.

Growth Erodes Efficiency, Historically

With org scale, you tend to see both strategic and tactical misalignment.

As a vivid example, he showed a picture of a mansion in San Jose.  It’s so big that it even has 2 basements. But it only has one working bathroom (and 12 “decoy” bathrooms).  Sarah Winchester, heir to the Winchester Rifle company, commissioned this mansion.

When she acquired her wealth, she felt burdened by the means of acquisition, so, naturally she went to see a medium. The medium said that she could assuage the situation by continuously building her house.

Apart from being one of the craziest things I’ve ever heard, the story became a paragon of “confusing activity with progress.”  It’s a vivid example of building without a plan.

J.R.’s point? The solution to this would have been a blueprint.

Blueprints at Twilio

A blueprint is a shared document that captures:

  • Key problem to be solved.
  • Requirements from stakeholders.
  • High level designs focused on things that would be expensive to change later. The idea is to avoid boxing yourself in.

So, why do this?


  • It drives alignment
  • It’s a forcing function to think things through and discuss ideas with others
  • It promotes the best ideas and minimizes sprawl and mistakes
  • It removes surface area for misinterpretation
  • More eyes make for better plans

In practice, blueprinting is a balancing act. Too little planning creates waste in the form of future rework, but too much planning re-creates all of the problems of waterfall software development (and Twilio is an agile shop).  And, along those lines, J.R. stresses that the blueprint is a living document.

Blueprinting Lessons Learned

There were plenty of mistakes along the way, he says, that provided learning opportunities:

  • Use a tool to manage blueprints
  • Find weaknesses with sequence diagrams
  • Remember rejected alternatives (it’s much easier to remember what you did than what you opted not to do)
  • Measure and iteratively improve: we want the process to serve us, not the other way around.

My Takeaway

I was intrigued by this, particularly since Twilio’s group exists in a gap between my experience, which has been, historically:

  1. Working with small organizations that are nowhere near Dunbar’s number and
  2. Consulting at organizations that far surpass Dunbar’s number who are trying to figure out how to be more like (1)

It has been my experience that the road to scaling horribly is paved with “here’s how we scale sensibly,” but there’s admittedly an element of selection bias there. Organizations haven’t historically called me in as a management consultant and paid me to tell them how great things are going. In other words, I’m almost by definition working with companies that have scaled poorly.

So it’ll be interesting to see how “sensible blueprinting” continues to scale for Twilio in the future. Good development process at scale is a tough nut to crack, so anyone cracking it deserves admiration.

What’s Holding Back Analytics at the Edge, 9:30 Pacific

I came in slightly late because I was wrong about which room the talk was in.  Oops. And being late is worse today, since the sessions are only half an hour.

But here I am now.

The talk is by Tom Yates of Nubix (I think, anyway; there was a different speaker scheduled, so I did a little on-the-fly detective work on LinkedIn), and he was talking about the idea of “edges” in enterprise architecture:

  • Server edge
  • Device Edge

This talk is mostly about the device edge. For those of you following along at home, “edges” are entry points into enterprise systems: the company-issued smartphone, say, or your own smartphone with a company app.

“Software is eating the world, but only nibbling at the edge.”

Tom added a slide with this quote, and I enjoyed it.

Understandably, there are a lot of challenges when it comes to obtaining and processing data from and related to these devices.  Latency and processing time are factors.

There are opportunities here, though.

He cited Chevron as a case study. They take in 30 TB of data per day from wells, but they only analyze a small portion of it to help give them intelligence about changing wells and pump rates and whatnot. (I am not an oil man, but I can still imagine the conundrum)

Analytics Gap

One of the big challenges here is that Analytics folks work in Python and R, in cloud-based environments. But the people working directly with field firmware work with C and C++, so there are mundane logistical issues with extracting the data and getting it somewhere that the data folks can analyze it.

There’s also a mindset difference between embedded and cloud folks. Embedded developers can brick things, so they’ll proceed more cautiously and update more judiciously, and rightfully so. But this makes it hard to instrument much analytics collection capability.

My Takeaway

The entrepreneur in me immediately perceives a large opportunity here. I’ve been a home automation (pre-IoT) aficionado since the early 2000s, and have at least dabbled in all of the concerned techs: embedded systems, IoT, data analysis.

There is a TON of data out there being generated, and there’s a lot of waste happening as it just evaporates. Finding pragmatic ways to capture and analyze that information seems likely to become a decisive competitive differentiator in the not-too-distant future.

In other words, if you’re not using devices at the edge of your infrastructure to create actionable intelligence, you can bet your competitors will.

Hotel Room, 9:25 Pacific

Back at it on day 2!  Yesterday was fun, and I meant to close things out with a summary of the day, but I got tired.  Oh well, c’est la vie.

Had a leisurely catch-up on work this morning in the lounge along with my coffee and got ready for the day. Now, I’m running a little late, so time to hustle down for the talks.

My schedule isn’t wall-to-wall talks today, so I’ll try to share some personal notes and observations from the conference.


Live Blogging Developer Week, 2020, Day 1

Keynote: Launch Darkly, What Developers Need to Know, 6:00 Pacific

Journalist TC Currie interviews Edith Harbaugh, CEO and Cofounder of LaunchDarkly.

Let’s talk feature flags and testing in production!

The recent Iowa Caucus debacle made for a good segue into the subject of production software testing. TC Currie introduced the topic that way and talked about testing in production and feature flags, and then started off the conversation by asking Edith about the same.

Edith paused for a brief nod to Oakland, where LaunchDarkly and this conference are both based, and then talked about something I can easily relate to: long periods of development, followed by long periods of testing, and then deployments. Those were the bad old days. And they featured a lot of misalignment between testing and production realities.

Feature flags combat this.

The simplest concept is that you push code into production guarded by a conditional. You can then, via configuration, turn the bits on or off on the fly without deploying additional code.

This confers the powerful ability to segment users. You could, for instance, turn a new feature on for only 5 people. Or maybe you only want to turn it on for users in a certain state or country.
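As a minimal sketch of that guarded-conditional idea (the flag store and the targeting rule below are invented for illustration; a product like LaunchDarkly evaluates rules like these server-side for you):

```python
# A toy flag store, refreshed from configuration in a real system.
# Flag names, fields, and rules here are hypothetical.
FLAGS = {
    "new-checkout": {"enabled": True, "countries": {"US", "CA"}},
}

def flag_on(name, user):
    """Return True if the named flag is on for this user."""
    rule = FLAGS.get(name)
    if rule is None or not rule["enabled"]:
        return False  # unknown or disabled flags default to off
    # Segment users: only turn the feature on for certain countries
    return user.get("country") in rule["countries"]

user = {"id": 42, "country": "US"}
checkout_page = "new" if flag_on("new-checkout", user) else "old"
```

Flipping `enabled` to `False` in configuration turns the feature off for everyone, instantly and with no deployment, which is exactly the killswitch property discussed below.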

Testing in Production

This conversation led to a natural question about testing in production.  Obviously, you can’t avoid testing in production, but you also don’t need to do all of your testing in production, either.  What’s the right balance?

Edith actually wrote an article once about killing your staging server, admittedly as a bit of hyperbole. But the idea is that the staging server can provide a false sense of security that nothing is wrong.

Feature flagging helps you manage production complexity. In a world with all kinds of dependent microservices, you need the ability to shut these things on and off at will, which is what feature flags allow. This allows a level of production control that you can’t adequately prepare for ahead of time.

Feature flags provide a failsafe or a killswitch.  Edith refers to this as “minimizing the blast radius.”

Avoiding Deployment Stress

Another interesting benefit of feature flagging is avoiding what Edith calls “push and pray” releases.  She describes a situation where a West Coast developer, giant cup of coffee in hand, is deploying something at 4:47 AM and hoping for the best.

Feature flags really ramp down the risk in such a situation. If something goes wrong, you don’t do frantic support or rollbacks. Instead, you just turn off the feature.

Setting Up Tests in Production

From here, TC segued to a question about what developers need to know about setting up for testing in production.

Edith answered that, first of all, it’s not an either/or proposition. You should still have your pre-production validations and such. But now you’re building in production safety valves to deploy more confidently.

The idea is that your development process now involves creating failsafes ahead of time, as well as pre-thinking your rollout strategy. So this gives you a more intuitive understanding of options in production, meaning that you can run experiments more easily and with less risk.

Feature Flag Development Tips/Tricks

TC asked an interesting wrap-up question about how developers can implement feature flags successfully in their own applications. Here were Edith’s tips:

  • Think about boundaries. Why are you segmenting your users?
  • Use a good naming strategy.
  • Have a good strategy for your defaults, agree on it, and document.  Does “false” correspond to “on” or “off”?
  • Have a feature flag removal strategy, since feature flags are technical debt.

And, that was it — an interesting talk about testing in production and how feature flags help with that.

How to Build an Enterprise-Wide API Strategy, 5:00 Pacific

This is a talk by Iddo Gino, CEO of RapidAPI.

Unfortunately, I came in just a little late. I snuck out between sessions to wolf down some food because I was starving, and I missed a little bit, despite my best efforts.

But I’m very interested in this topic, so I hustled down.

APIs: Current State — Lots of Opportunities

When I came in, Iddo was talking about three buckets of API:

  • Private APIs
  • Partner APIs
  • Public APIs

And he was also talking about the lifecycle and proliferation of APIs in general. The gist, as I understand it, is that companies keep these things around and tend to layer them on top of each other, making discoverability of them an increasing challenge.

APIs also have a significant commercial angle. Consider:

  • Expedia
  • eBay
  • Salesforce

These companies are all generating at least 50% of their revenue via their APIs, with Expedia generating 90% this way.  This underscores the importance of the partnership API.

But There Are Also Challenges

We’re also seeing more companies exposing public APIs.  That figure has increased exponentially over the last 15 years.

This happens because it significantly increases developer productivity and accelerates development. Developers can leverage platforms that they don’t have time or resources to develop themselves.

But this creates a lot of challenges.  Integrating APIs is hard. Developers have to learn almost a new language every time they want to use one. It also creates an explosion in runtime dependencies and exposure to risk over which you have no control, including concerns like compliance (e.g., GDPR).

Creating an API Strategy

So how, then, do you create a broad strategy?  There are 5 components to a successful API strategy.

  • Executive support. While developer adoption is critical, executive support is the catalyst.
  • Organization structure. Who approves APIs to be used and published? Who defines API standards? Broad standards, or up to each individual team? Companies need to define who owns what, from an org chart perspective.
  • API platforming. You need a standard place in the organization that developers can go and discover APIs — a marketplace, so to speak. At Rapid, they describe this as the “API Hub.” The idea is to have a standard place for developers to self-service publish and consume.
  • Peripheral tooling. Consider things like API design and documentation.  Developers can use tools like Postman and Swagger to make their lives easier with respect to API creation.  Testing and monitoring are also important peripheral concerns.
  • Education and awareness. It’s important to think of API development as a skill, so having programs in place to provide education around development, testing, etc. of APIs is important.

RapidAPI is a solution that provides API marketplaces. It’s available as a white-label service within enterprises, but it also serves as a first-class way to search for public APIs.

Iddo took us through a demo of RapidAPI, discovering and making use of an API in real time in front of us. He was actually able to generate a language-specific code snippet to get going very quickly with consumption.
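For a sense of what such a generated snippet looks like, here’s a hedged reconstruction. The host, the path, and the placeholder key are all mine; the snippet RapidAPI generates fills in real values for the API you select.

```python
import urllib.request

# Roughly the shape of a RapidAPI-style consumption snippet. The example
# host and path are hypothetical; the API key is issued when you sign up.
API_KEY = "YOUR-RAPIDAPI-KEY"

def build_request(host, path):
    """Construct an authenticated GET request for a marketplace API."""
    req = urllib.request.Request(f"https://{host}{path}")
    req.add_header("X-RapidAPI-Key", API_KEY)
    req.add_header("X-RapidAPI-Host", host)
    return req

req = build_request("example-api.p.rapidapi.com", "/v1/search?q=hello")
# urllib.request.urlopen(req) would execute the call
```

The point of the demo was that you skip the usual documentation slog: the marketplace hands you a snippet like this with the auth plumbing already wired up.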

My Takeaway

This one is pretty easy, as far as I’m concerned.

One of the things that I enjoy the least when working on our internal software for Hit Subscribe is consuming external APIs. It’s always an unpleasant hodgepodge of reading documentation, trial and error, debugging authentication issues in Fiddler, etc. It’s hours of my life wasted.

So, anything that makes this easier is a win in my column. The next time I have to consume a new API, I’m going to head here, check out the marketplace, and see what kind of boost I get to my productivity. I see no downside to trying this out.

An Introduction to Microservices with the Serverless Framework, 4:00 Pacific

This is a talk by Fernando Medina Corey, of Serverless. He led into the talk with the caveat that he wouldn’t be able to help us all debug a setup.

I’ll cut him some slack, as long as I’m one of the people he stops the talk to help.

Anyway, he outlined 4 different goals:

  1. Develop/test a microservice
  2. Deploy to AWS
  3. Debug the service
  4. Monitor the service

Let’s do it.  How would this go?  (He has material for attendees to work through afterward.)

  1. Create an AWS account
  2. Install the AWS CLI.
  3. Configure the AWS CLI
  4. Install the Serverless Framework: binary installer for Mac/Linux, Chocolatey on Windows, NPM.
  5. Clone the project.

All of this will get to the point of building his toy application, “Serverless Jams.”  I like where this is going.

Serverless Jams allows you to add a phone number, receive a voting code text, and then vote on songs.  The idea is to exercise a number of different APIs.

We’re going to do this with Amazon API gateway, Lambda, a DynamoDB table, and Simple Notification Service.

What does the code look like?

Well, it looks like Python, at least on the backend, with 3 files. The client side has JavaScript and markup, plus a YAML configuration file.  On the backend:

  • generate_code.py
  • get_votes.py
  • vote.py

Simple enough.  Now, on the front-end:

  • app.js
  • index.html

And then serverless.yaml, which wires things up from a configuration perspective.  This is where the magic happens, and Fernando walked us through all of the information that needs to go in here to create the right sets of permissions.

I know only enough about this to shoot myself in the foot, but he’s making me feel like I’d be able to handle it in an afternoon.
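For a sense of what one of those backend files might contain, here’s a hypothetical sketch of a Lambda-style vote handler. The event shape, function body, and in-memory vote store are my assumptions, not Fernando’s actual code (the real demo persisted votes to DynamoDB).

```python
import json

# Sketch of what a file like vote.py could look like: an AWS
# Lambda-style handler behind API Gateway. Hypothetical throughout.
VOTES = {}  # stands in for the DynamoDB table used in the real demo

def vote(event, context=None):
    """Record a vote for a song and return an HTTP-shaped response."""
    body = json.loads(event.get("body", "{}"))
    song = body.get("song")
    if not song:
        return {"statusCode": 400,
                "body": json.dumps({"error": "no song provided"})}
    VOTES[song] = VOTES.get(song, 0) + 1
    return {"statusCode": 200,
            "body": json.dumps({"song": song, "votes": VOTES[song]})}

resp = vote({"body": json.dumps({"song": "Stairway"})})
```

The `serverless.yaml` file is then what maps an HTTP route to this function and grants it permissions, which is where most of the wiring Fernando showed lives.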

In furtherance of this, Fernando asked how many lines of code all of this would take, combined with the front and back end?

The answer? 436 lines of code, “with me being a pretty poor front end developer.”

(I’m with you there, Fernando, but I think if we switched to a language without significant whitespace on the backend, we could probably slam all of it into 50 lines, tops)

Deployment Options

With the structure and code squared away, he mentioned 3 potential deployment options.

  1. Local AWS Keys (with 2 different sub-options)
  2. Using the Serverless Dashboard, assuming you’ve created an account.
  3. Serverless CI/CD.  (He won’t touch on this today, but this is a good approach for teams that have established a serverless approach)

Demo Time!

And, with that, he popped open an IDE and started working.  Then it was time to sign up for serverless and get into the dashboard. He did this all pretty quickly, but it honestly looked really straightforward.

After development, configuration, and deployment, it was time to test and debug. The first issue was that he mistyped his phone number, and the second was that he’d already used his number for another demo. So this led to the actually-even-cooler situation where someone in the audience gave a phone number, received the code, and entered it.

At the end, perhaps not surprisingly, everything worked.  This left a little time for a deeper dive, and the audience voted to take a more detailed look at the front end.

In the end, enthusiastic applause.

My Takeaway:

This was a lot of fun to watch and well-presented. And there’s something infectious about watching a talk that’s basically, “look, this cool thing is totally more doable than you might have thought.”

My takeaway here is that technologies like Lambda make glue code much more approachable.  But that’s been true for a while, and I was aware of it.

But now, looking at Serverless as an orchestration framework, for lack of a better term, I have the sense that I can put together some very well integrated functionality very quickly. I have, perhaps, a false sense of confidence that I could do this in an afternoon.  But in reality, I can’t imagine it’d take terribly longer.

If I put on my “old man programmer” hat, I can recall a time when building an app that sent me authentication codes via text would have required weeks, lots of planning and integration, and much pain. And it’d probably have been expensive.

Now, a lot of cool things are possible.  Look for me to auto-spam you all text messages sometime soon.

Interlude, 3:50 Pacific

One of the things that I’m interested in doing, longer term, is to see whether sponsors, speakers, or conference organizers would be interested in live blogging as a value-add in some capacity.  In order to evaluate that, I’m just running the experiment of doing this, journalism-style.

One thing that I’m learning is that documenting a bunch of talks in a row is kind of tiring, in a weird way. It’s almost like a mental version of the soreness you get from doing a weird form of exercise that stretches a muscle you barely knew you had.  “I didn’t even know it was possible to be sore there.”

Anyway, this is more mentally tiring than I would have thought.

Building Highly Scalable Applications, 3:00 Pacific

This is a talk by Mark Piller, CEO/founder of Backendless.

We start off here with the premise that scalability is hard and that “let’s throw more servers at it” is a recipe for failure. So it must not be an afterthought; it must be baked in from the beginning.

So think of scalability as a part of every single component of your architecture. And with that in mind, this is going to be a walk through Mark’s experience.

Basic Rules

These are rules that you need to think about before adding additional servers. You need to squeeze everything you can out of existing resources:

  • Avoid blocking execution — you don’t want to create bottlenecks like this.
  • Avoid centralization — in general, centralized information or decision-points scale poorly.
  • Foster easy replication — it should always be easy to add additional servers, whatever kind of servers they are.
  • Stateless programming model — it’s easier to scale up without transient information to track.
  • Pagination — whenever you work with a database, it’ll grow, so a “SELECT * FROM…” approach isn’t going to cut it.
  • Test and validate before committing to any component of your tech stack — load testing is key.
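To make the pagination rule concrete, here's a quick sketch of what it looks like in practice — fetching a table in fixed-size pages instead of pulling everything at once. The table and column names are my own invention, not from the talk:

```python
# Hypothetical sketch: paginate a query rather than loading an entire,
# ever-growing table into memory with "SELECT * FROM ...".
import sqlite3

def fetch_orders_paged(conn, page_size=100):
    """Yield rows in fixed-size pages using LIMIT/OFFSET."""
    offset = 0
    while True:
        rows = conn.execute(
            "SELECT id, total FROM orders ORDER BY id LIMIT ? OFFSET ?",
            (page_size, offset),
        ).fetchall()
        if not rows:
            break
        yield rows
        offset += page_size

# Toy data: 250 rows comes back as pages of 100, 100, and 50.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.0,) for i in range(250)])
pages = list(fetch_orders_paged(conn, page_size=100))
print([len(p) for p in pages])  # → [100, 100, 50]
```

The memory footprint stays bounded by the page size, no matter how big the table grows.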

Reference Architecture

I’m not in a position to snap pictures of the slides, but he posted a diagram of their architecture.  Slides will be live in a couple of weeks, so you can check that out when they are.  It’s worth a look.

He talked about how this reference architecture allows for easy horizontal scale.

Brief Summary/Rest of Talk

At this point, I’m going to take a break from faithful and time-consuming note-taking/blogging to take in a bit of the talk.  2.5 consecutive hours of live blogging makes me want to come up for air.

So I’m going to enjoy the talk more passively and document just an overview. Here’s the gist of what he talked about:

  • Database: a lot of good detail on how Backendless approaches database scalability.
  • Redis: they use it for caching, job/task queues, atomic counters and synchronization.
  • File System: they tried a lot of different approaches but settled on GlusterFS.
  • Caching: they prefer Ehcache.
  • Async Job Processing (mass emails, push notifications, etc): they use a Redis queue.
  • Code best practices: avoid using synchronized (Java) and favor the Future API.
  • Kubernetes: lets you scale fast by just adding pods to workers.
  • Monitoring: if you expect failure, you can learn to react to signs of failure and prevent it.
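The "avoid synchronized, favor the Future API" point was in a Java context, but the same idea translates directly to Python's `concurrent.futures`, which is how I'd sketch it:

```python
# Sketch of "prefer futures over blocking on shared locks": submit
# independent tasks and collect results as they complete, instead of
# serializing everything behind a synchronized section.
from concurrent.futures import ThreadPoolExecutor, as_completed

def send_notification(user_id):
    # Placeholder for real work (e.g., a push notification call).
    return f"notified {user_id}"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(send_notification, uid) for uid in range(5)]
    results = sorted(f.result() for f in as_completed(futures))

print(results[0])  # → notified 0
```

Nothing here is from the talk itself — it's just the shape of the pattern he was advocating.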

My Takeaway:

In some senses, this was a pearls-before-swine situation. Meaning, I find the subject of massive scale theoretically interesting to consider, but it doesn’t have a lot of application for a guy who earns a living running a content marketing company and occasionally updating our internal line-of-business software.

So to me, this talk was conceptually interesting, but it also served as a way for me to index technologies that I should check out, should the need arise. In other words, here is a company successfully implementing these principles in the trenches. And Mark shared both which technologies had worked for them, and which hadn’t.

So, if I suddenly found myself, Quantum-Leap-Style, as the CIO of a company looking for scale, I’d have a first place to go to guide my research. I could review the techs that he’d mentioned, along with why some worked and some didn’t, and have good guidance for where to start.

(This talk generated a LOT of questions/discussion, part of which was probably because it was a packed house. But I think this is top of mind for a lot of folks.)

Good Rules for Bad Apps, 2:00 Pacific

This is a talk by Shem Magnezi, engineer at WeWork. It’s a series of rules that you can follow for building apps that suck.

He’s apparently given this talk a lot, and it shows. The slides are fun and engaging.  I’m currently looking at one that has Christian Bale’s Batman confronting Heath Ledger’s Joker, and life is good.

He wants to clarify, upfront, that he’s talking about “bad” apps not in the colloquial sense of “bad meaning good,” but in the sense of genuinely bad: making you miserable.

He also mentioned that he used to talk a lot about how to build good apps, but that it could be more helpful to think in terms of how you ruin apps (and how to avoid that).

So, how does one ruin an application?

1. Ask for As Many Permissions as Possible

Consider a flashlight. One screen. Single button to turn the flashlight on and off.  Simple stuff.

So it needs to be able to take photos and record video, right? I mean, obviously. Sure, you may not need those permissions now, but who knows what the future holds?

And then, it probably goes without saying that you need to prevent the user from using the app altogether if they don’t agree.

2. Don’t Communicate Anything

Alright, now let’s say that we’re building a reminder application.  What do I have to do over the next week?

When it loads, you see nothing there.  Great!  Nothing to do.  Or, wait… is it just that your reminders haven’t loaded?  Do you not have a connection?

A good way to make a bummer of a user experience is to communicate nothing about the app’s current state or what’s happening behind the scenes.

3. Don’t Save Screen State

Let’s move on to a less trivial app: buying a few books through an eCommerce app.

Add the books to the cart, go to checkout, and then, well, fill in a lot of information. First name, last name… you get the idea. Then you need, of course, your credit card information.

So, as you reach into your wallet for your credit card, the screen rotates accidentally.  Oops, everything is all gone. For some reason, that triggers a refresh of the form.

There’s no better way to create a maddening experience than forcing you to fill all of that out again for no good reason.

4. Don’t Optimize App Size

You’re looking through the app store and you decide to install an app. You go into the store, find some well-reviewed, heavily downloaded app and you get ready to go.

But then, wait a second. Why is this app 70 MB? Yikes!  What if you’re somewhere with a bad signal or don’t have time to wait?

So you skip it, do something else, and later wonder why the download is so large. Then, maybe, you dig into it and realize that they’re packaging in all kinds of images of different sizes and iterations, perhaps for features that you’re not even going to use.

But then maybe you dig in further and find that there’s a huge file containing all sorts of phone numbers in different countries.  You probably don’t need all of those.

And maybe, this continues with a lot of different examples, all of which combine to add up to a lot of unnecessary data coming along with each download.  This is an experience that Shem has had, and it stopped him from downloading an app when he could have used it, which, obviously, is bad.

(As an aside, this is an interesting analog to the SOLID “interface segregation principle.”)

5. Ignore Material Design Specs

Have you ever seen a beautifully laid out app that had buttons and a general user interface paradigm that was completely new and foreign to you? That’s an interesting conundrum.

You may like it aesthetically, but you’ll have no idea what to do. We’ve come to expect a mobile experience where things are intuitive, lining up with what we’re already familiar with.

So if you want to create a bad app, you can make sure to do stuff the user has no experience with. Bonus points if it’s not even aesthetically pleasing.

6. Create Intro, Overlay, and Hints

Can you picture an app that shows you a LOT of explanations? It requires six pages of onboarding wizards to help you understand what’s happening. And it pops dialogs to help with new features, which are often non-dismissable.

Some of this, I’d imagine, can be useful. But a good way to create a bad app is to bludgeon the user with exposition at every stage of use. If you find yourself needing to do this, your app probably needs to be more intuitive.

7. Ignore Standard Icons and Widgets

The phone providers give you a lot of standard icons and widgets on the screen.  You should probably use those.

But if you want to build a bad app, use your own mysterious ones that nobody understands.

8. Create Your Own Login Screen

You know the feeling of getting a new app and immediately being prompted to generate login credentials? Well, take that, and add to it the feeling of having to hand type in and then remember a new password.

Bummer, right? Wouldn’t it be better to just log in with Google or Facebook or whatever?

When you ask new users to use a login screen that you’ve hand-created, you’re asking them to trust you.  A lot.  You probably shouldn’t do this unless you want to build a bad app.

9. Support the Oldest OS Version

When you’re a mobile developer, you need to look at the different OS versions that you need to support. You can actually go and check out a breakdown of the current user base to see who is using what, and make decisions based on this, like choosing the minimum version to support.

Product management, of course, by default, won’t want to lose any users. “We should go back and support all the things!”

But they don’t understand the complexity of checking for those users, adding conditional code, and generally juggling all of these concerns.  You, as the developer, will.

But, if you want to build a bad app, let yourself be overridden on this account. Support all the things.

10. Make Decisions without Data

Imagine an app with a button, like a “donate” (and give us money) button. You probably want as many people clicking on this as possible.

So the product manager wants this button to be green. But then the designer has the idea that the button should really be red. And the developer, well, the developer doesn’t care about color but wants it to be at the place in the screen where it would be the easiest to implement.

What should you do?  Well, if you want to build a bad app, you should probably duke it out, going with the strongest opinion.  But if you don’t want to build a bad app, you should probably rely on real, measurable data to see what works best.

If You Don’t Want to Build a Bad App, What Should You Do Instead?

So, flipping out of fun sarcasm mode, what’s a quick list of things you should do instead?  Here is Shem’s guidance there, in a nutshell:

  1. Permissions?  Instead, ask for only what you need
  2. Communications? Instead, notify about loading and empty state.
  3. Lose screen state? Instead, save screen state.
  4. Large app size? Instead, use vectors and modules.
  5. Unknown UX? Instead, use material design whenever possible.
  6. Have introductory exposition? Instead, let them explore and offer hints in context.
  7. Mysterious icons? Instead, use predefined icons.
  8. Roll your own login screen? Instead, use single sign-on.
  9. Support every framework version? Instead, know your users and strategically target them.
  10. Make decisions with no real data? Instead, measure data and use A/B testing.

He has more rules, which you can find at his site.

My Takeaway

I really enjoyed this talk. He’s a good speaker and it’s an engaging, relatable premise, but my enjoyment goes deeper than that.

Like anyone, I spend a lot of my time using my phone for very tactical, often time-sensitive purposes. I’m trying to catch a ride or look up whether my plane is late or whatever. So my phone is present for some of the tensest, most annoying moments of my life.

And it is these moments that provide some of the most intense technical frustration: waiting forever for something to load when you’re already late, or getting bonked with some kind of cryptic error message that won’t let you proceed to the next screen.

And when you’re confronted with these moments, nobody around you cares. They’re not interested in the temper tantrum that you’re bottling up.

So for all of those angry, frustrated moments when I had no one to confide in, I feel vindication from this talk. Shem captured a bunch of frustrating, relatable moments and made them into actionable lessons, in a funny way. It’s nice to know that I’m not alone in my intense frustration with mind-boggling UX choices.

A 3 Part Scoring Formula for Promoting Reliable Code, 1:00 Pacific

This is a talk by Chen Harel, co-founder of OverOps.

He wants to suggest a few quality gates that those of us in attendance can take back to our engineering groups.  This is ultimately about preventing severity one production defects.

Consider that speed and stability are naturally at odds when it comes to software development and deployment.  With an emphasis on time to market, the cost of poorly written software is actually growing, notwithstanding agile methodologies and increased awareness of the importance of software quality.

And here’s a contextual linchpin for the talk:

“Speed is important. Quality is fundamental.”

So how do organizations address this today?  Here are some ways:

  1. DevOps: “you build it, you run it,” to increase accountability.
  2. A “shift left” approach to software quality — bake quality concerns into the process earlier.

But how do we measure quality?

Measuring Code Quality

Measuring software quality is all about good data.  And we tend not to have that data readily at our disposal as much as we might want.

Here are some conventional sources of this type of data:

  • Static code analysis
  • Code coverage
  • Log files

But what about the following as a new approach?

  • New errors
  • Increasing errors
  • Slowdowns

Using these metrics, the OverOps team was able to create a composite means of scoring code quality.

A Scoring Process

So, let’s look at the reliability score. Releases are scored for stability and safety based on these measures.  And it requires the following activities for data gathering:

  • Detect all errors and slowdowns
  • Classify each detected event
  • Prioritize them by severity
  • Score the build
  • Block builds that score too low
  • And then, in retrospect, visualize the data

Toward this, let’s consider some data detection methods.  Manually, they have log files and metrics libraries.  But they can automatically detect issues using APM, log aggregators, and error tracking.

(Halfway through the talk, and I’m really enjoying this. As both a techie and business builder, gathering actionable data is a constant focus for me these days. I love the idea of measuring code quality both with leading and lagging indicators, and feeding both into a feedback loop.)


Now, drilling a little further into step (2) above, classification.  Ask questions like:

  • Is the dev group responsible?
  • What’s the type or potential impact?
  • What dependent services are affected?
  • What is the volume and rate of this issue — how often did it happen or not happen?


When it comes to prioritizing, we can think of new errors as ones that have never been observed. And a new error is severe if it’s

  • Uncaught
  • A critical exception
  • Volume/rate exceeds a threshold

There’s also the idea of increasing errors that can help determine severity. An error can become severe if its rate or volume is increasing past a threshold.

You can also think about errors in terms of seasonality to mitigate this concern a bit. That is, do you have cyclical error rates, depending on time of day, day of week, or other cyclical factors? If so, you want to account for that, so that rate increases which are just the normal course of business don’t register as severe.

And, finally, you can think of prioritizing slowdowns. Slowdowns mean response time starts to take longer, and a slowdown becomes severe based on the number of standard deviations it is away from normal operation.
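Pulling those prioritization rules together, here's a hedged sketch of the logic as I understood it. The thresholds and field names are my own invention for illustration, not OverOps's actual implementation:

```python
# Sketch of the severity rules from the talk: a new error is severe if
# it's uncaught, critical, or over a volume threshold; a slowdown is
# severe past N standard deviations from the normal response time.
import statistics

def error_is_severe(event, volume_threshold=100):
    return bool(
        event.get("uncaught", False)
        or event.get("critical", False)
        or event.get("volume", 0) > volume_threshold
    )

def slowdown_is_severe(response_ms, baseline_ms, max_sigma=3.0):
    """Flag a response time more than max_sigma stdevs above baseline."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    return (response_ms - mean) / stdev > max_sigma

print(error_is_severe({"uncaught": True}))           # → True
print(slowdown_is_severe(900, [100, 110, 95, 105]))  # → True
```

The seasonality point above would refine this further — the baseline would be per time-of-day or per day-of-week rather than one global window.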

Scoring Formula

So based on classification and priority, the OverOps team starts to assign points to errors that occur. They took a look at severity, as measured by things like “did this get us up in the middle of the night,” and adjusted scoring weights accordingly until they fit known data.

This then provides the basis for future prediction and a reliable scoring mechanism.

Now, assume all of this is in place. You can automate the gathering of this type of data and generate scores right from within your CI/CD setup, using them as a quality gate.
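As a sketch of what that gate might look like, here's a toy version: weighted penalties per event type, summed into a release score, with the build failing below a cutoff. The weights are invented — the talk described fitting them against historical severity data:

```python
# Hypothetical CI/CD quality gate: score a release from its detected
# events and block it if the score falls below a cutoff.
WEIGHTS = {"new_severe": 10, "increasing": 4, "slowdown": 2}

def release_score(events, starting_score=100):
    """Deduct a weighted penalty for each detected event."""
    penalty = sum(WEIGHTS.get(kind, 1) for kind in events)
    return max(0, starting_score - penalty)

def gate(events, cutoff=70):
    score = release_score(events)
    return score, score >= cutoff

score, passed = gate(["new_severe", "new_severe", "slowdown"])
print(score, passed)  # → 78 True
```

In a real pipeline, a failing gate would translate to a nonzero exit code from the CI step, which is what actually blocks the build.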

A Code Quality Report

Having this integrated into your build not only allows you to reject builds that don’t pass the quality gate; you can also generate some nice reporting.

Have a readout for why a given build failed, and have general reporting on the measured quality of each build that you do.

My Takeaway:

I’ve spent a lot of time in my career on static code analysis, which I find to be a fascinating topic. It promises to be a compile-time, leading indicator of code quality, and, in some ways, it does this quite well. But the weakness here is that it’s never really tied reliably into actual runtime behaviors.

In a sense, a lot of static analysis involves predicting the future. “Methods this complex will probably result in bugs” or “you might have exposed yourself to a SQL injection.”

But the loop never gets closed. Does this code wind up causing problems?

I love this approach because it starts to close the loop. By all means, keep doing static analysis. But also run experiments and measure what’s actually happening when you deploy code and feed that information back into how you work.

Joy! 12:50 Pacific

As I come up on my 40th birthday, I think the appropriate headline for this is “Old Man Figures Out Smart Phone.”

Sched support was very responsive and helpful. From what I can tell, the email address it registered me with is the one associated with my Facebook account. So “logging in” via Facebook apparently meant using Facebook’s email address on file as my Sched username.

Live and learn, I guess.

Time to grab a soda from the lounge and go listen to a talk about a scoring formula for reliable code. As anyone who follows my stuff on static analysis will know, this one is of particular interest to me.

Troubleshooting, 12:35 Pacific

So far, so, well, not the best. I signed up for my schedule of events a while back through some kind of (at the time) seamless combination of Eventbrite, the conference itself, and something called Sched.  I did this all with my Facebook account.

Apparently, the Facebook auth option went away, though. So now I’m left with no way to log in and show the ushers that I’ve reserved a seat.

C’est la vie. I think I probably picked a relatively corner-case way to sign up. Plus, I can’t imagine the complexity of coordinating all of this stuff, logistically, across several apps and authentication protocols.

I’d be happier if I could attend the talks, but the people doing support are being really responsive, so hopefully it’ll all be sorted soon.

Pre-Conference, 11:30 Pacific

Those of you who follow my blog might be disappointed to know that I put no effort into second passes on my writing. What you get is just a dump of whatever pops into my head, typos and all.

In a lot of content situations, you might consider this a liability. Well, today, I’m going to strive to make it an asset.  I’m going to live blog this conference in a style that will probably read like most of my blog posts.

This week, Amanda and I are in San Francisco/Oakland to meet with clients and so that I can attend DeveloperWeek.  I’m hoping to enjoy some talks, meet some folks, find potential clients, and de-hermit a little bit.

I’m an introvert who lives a reclusive life in the woods.  I own a remote-only business.

And now, here I am, attending a professional conference for the first time in years, ready to have a whole lot of human interaction (for some definition of “ready”).  And I’m going to live-blog the whole thing, gonzo style (but with less chemical alteration than Hunter Thompson).

So stay tuned to the blog for updates throughout the day for the next three days. I’ll blog a lot about the talks, but also more generally about what I’m up to.  And I invite you to weigh in about which talks I should attend or what you’d like to hear about.  Comments, tweets, etc. all welcome.

If you’re curious about the location and the hotel, I’d like to report that I’m enjoying both. I have enough status left over with Marriott from my management consulting days that they gave us a room with a great view of Oakland.

So life is good!  Ready to hear about the latest and greatest in tech.