Saturday, July 13, 2013

Interview with Daniel Lee

The Erlang User Conference 2013 is really close, and I have had the chance to interview another speaker: Daniel Lee, Core Platform developer at Klarna. In my opinion, his talk will be a must-see; if you have any doubts, please read the following interview and they will surely disappear.



MIRKO: HI DANIEL, THANKS FOR BEING AVAILABLE FOR THIS INTERVIEW. PLEASE INTRODUCE YOURSELF TO OUR READERS.




DANIEL: I have been a developer at Klarna for a bit over two years now. I grew up in Los Angeles, and did my studies at Cornell University and Carnegie Mellon University, where I was bitten by the functional programming bug and got to study with some really smart people with colorful personalities. After leaving graduate school, I spent some time in the service industry bartending in failing restaurants with pretty decent food. I attribute my sense of software development taste to the schooling and my work ethic to the bartending. In 2011, I moved to Sweden for a fantastic opportunity at Klarna and it's been a great adventure ever since. I am on Twitter.



MIRKO: YOUR TALK AT THE ERLANG USER CONFERENCE 2013 IS REALLY INTERESTING. YOU CALLED IT "CONTINUOUS MIGRATION: RE-IMPLEMENTING THE PURCHASE TAKING CAPABILITY OF A 24/7 FINANCIAL SYSTEM". SO YOU ARE BASICALLY RE-FACTORING THE WHOLE "PURCHASE TAKING CAPABILITY" WITH NEW ERLANG CODE, FOLLOWING BEST PRACTICES AND STANDARDS FROM THE ERLANG COMMUNITY. WHY HAS THIS BECOME A PRIORITY WITHIN KLARNA? I REMEMBER A TALK FROM DAVID CRAELIUS BACK IN 2011 ABOUT ERLANG IN KLARNA. HAVE YOU FOUND THAT THE PREVIOUS MONOLITHIC ERLANG SYSTEM WAS VIEWED AS TECHNICAL DEBT?



DANIEL: The "Purchase Taking Capability" is the most important function of Klarna's soft Real-Time Domain. Klarna's business model is all about increasing conversions for our merchants, so downtime there means lost business for Klarna, lost purchases for our merchants, and poor user experience for the end-consumers. Historically, availability of purchase taking capability was dependent on the availability of the legacy monolithic system. The goal of this project is to decouple that capability from the legacy monolithic system to provide a more reliable end-user experience, as well as better scalability in terms of increased purchase taking capacity.



There's a line from the old Fleetwood Mac song "Landslide" (or the Glee cover, if you prefer): "I've been scared of changin', because I've built my life around you." Legacy code (and the legacy business logic it implements) is a pretty painful and risky thing to change at a company like Klarna, because you know that if no customers are complaining about it, you are quite possibly making a lot of money from it. Our customers would prefer to integrate with us once at the beginning of our relationship, and never have to mess with that ever again. Breaking a legacy integration is almost always a painful and expensive customer relations problem.



Much of Klarna's meteoric growth is due to herculean pushes to get new features to market quickly. The shortcuts taken for these features, or even design decisions that were entirely reasonable at the time, make up much of the technical debt. Trying not to break features no one really understands anymore can really slow you down. Developers would love to throw much of this away and start fresh, but much care must be taken to minimize how this affects the merchant integrations and customer experience.



MIRKO: IS EXTRACTING THE "PURCHASE TAKING CAPABILITY" ONLY THE FIRST STEP IN A BIGGER RE-FACTORING PROJECT, OR IS IT A STANDALONE PROJECT? ARE YOU AIMING FOR A SYSTEM OF SMALL SYSTEMS WHICH TALK TO EACH OTHER BUT ARE MORE MAINTAINABLE AND TESTABLE?



DANIEL: Klarna's legacy monolithic system worked surprisingly well when there were a small number of developers, most of whom were well-salted Erlang gurus. A single point of delivery will not scale for a development organization with dozens of product teams. It's also a pain to operate. Klarna is currently undergoing big initiatives to divide up business functionality into independent services with much more purity of purpose. This re-factoring has produced a number of independent utilities, frameworks and business libraries that are shared between multiple systems. Another talk at the conference goes into the ecosystem of new independent systems much more thoroughly.



MIRKO: I LIKE THIS INTERVIEW BECAUSE I CAN ASK SOME VERY TECHNICAL QUESTIONS. WHAT ARE THE BEST PRACTICES FOR FIGHTING "LEGACY CODE"? I KNOW THAT CHANGING CODE THAT IS MAKING MONEY WHILE WE ARE WRITING THIS INTERVIEW IS NOT AS EASY AS IT SEEMS. IS IT A SIMPLE MIGRATION FROM SYSTEM A TO SYSTEM B IN THE HOPE THAT EVERYTHING WILL BE OK, OR IS IT A STEP-BY-STEP MIGRATION TO A SYSTEM WHICH GROWS ITS FUNCTIONALITY DAY BY DAY?



DANIEL: Good taste and common sense. I personally prefer a "Ship of Theseus" approach, where big re-factorings are shipped incrementally until you have something completely new that essentially does the same thing. Each individual change can be understood much better, and the aggregate of all the changes turns a hairy mess into something remotely attractive. Shipping a huge diff in a critical part of the system is extremely dangerous. Having been involved in both sorts of changes and seen many more, dropping a huge diff all at once is generally a sign that something broke down somewhere in the planning or development process. And the releases are terrifying. No thanks.



We've also had much success in developing frameworks that capture the structure of the control flow of the code, and pushing the business logic into callbacks passed to the frameworks. A great example is the framework that was used to re-implement our XMLRPC-API into something sensible. It went from a single module of 8000+ lines of code to a family of callbacks describing input types and API methods. Many of the callbacks still retain legacy compatibility quirks, but things are at least organized in a much nicer way.
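
As a flavour of the pattern (module and callback names here are hypothetical, not Klarna's actual framework), such a framework can define a behaviour whose callbacks describe an API method, keeping validation and error handling in one place:

    %% The framework owns the control flow; each API method is a small
    %% callback module describing its inputs and its handler.
    -module(api_framework).
    -export([dispatch/2]).

    -callback input_spec() -> [{Name :: atom(), Type :: atom()}].
    -callback handle(Args :: [{atom(), term()}]) -> {ok, term()} | {error, term()}.

    %% Validation is written once here, not repeated in every method.
    dispatch(Mod, Args) ->
        case validate(Mod:input_spec(), Args) of
            ok              -> Mod:handle(Args);
            {error, Reason} -> {error, {invalid_input, Reason}}
        end.

    validate(Spec, Args) ->
        case [Name || {Name, _Type} <- Spec, not lists:keymember(Name, 1, Args)] of
            []      -> ok;
            Missing -> {error, {missing_params, Missing}}
        end.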



A key safety point in correctly doing this sort of migration is to avoid duplicate copies of the same logic. This involves moving shared logic from the version control of the legacy system to an independent sub-component shared between the legacy and new systems. At Klarna, this involves git, with the legacy system maintaining dependencies with git submodules and the new system using rebar for dependency management. There is some pain in managing change in such a situation, but there are also nice wins in new features and fixes going into both systems "for free".
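
Concretely, that can look like the sketch below; the repository name and URL are made up for illustration. The new system pulls the shared library through rebar, while the legacy system pins the very same repository as a git submodule:

    %% rebar.config of the new system (hypothetical name/URL):
    {deps, [
        {purchase_lib, ".*",
         {git, "git@git.example.com:klarna/purchase_lib.git", {tag, "1.2.0"}}}
    ]}.
    %% The legacy system tracks the same repository as a git submodule:
    %%   git submodule add git@git.example.com:klarna/purchase_lib.git lib/purchase_lib

A fix tagged in the shared repository then reaches both systems on their next dependency bump.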



MIRKO: WHAT IS YOUR OWN PERSONAL DEVELOPMENT CYCLE? DO YOU FOLLOW TDD, WRITING AN ACCEPTANCE TEST FOR EACH NEW FEATURE AND THEN UNIT TESTS UNTIL YOU GET THE GREEN LIGHT, OR ARE YOU LESS INCLINED TO TESTING? IF YOU TEST YOUR ERLANG CODE, CAN YOU TELL US SOMETHING ABOUT THE TOOLS AND YOUR EXPERIENCES?



DANIEL: As a Core Platform developer, my personal situation is a bit unique, in that I mostly work on logging and monitoring issues, build/release related issues, developing libraries and frameworks, and the occasional business logic re-factoring. My acceptance criteria are typically "make my fellow developers happy", "make my operators happy" or "do the same thing in a more re-usable way without breaking existing behaviour". In the last case, there are often existing regression tests in place. In most others, I prefer unit testing the new functionality. I also believe that there are many cases where problems can be sufficiently generalized into frameworks or libraries with a purity of purpose and a manifestly correct implementation. Such solutions work perfectly*, and bugs arise from using them wrong.



We use both EUnit and Common Test at Klarna, with a smattering of PropEr.
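
For readers who haven't used them, here is a tiny taste of each style. This is my own illustrative sketch; the checkout module is hypothetical.

    %% --- checkout_tests.erl --- EUnit: example-based unit tests.
    %% Run with eunit:test(checkout_tests).
    -module(checkout_tests).
    -include_lib("eunit/include/eunit.hrl").

    to_cents_test() ->
        ?assertEqual(9950, checkout:to_cents({99, 50})).

    %% --- checkout_props.erl --- PropEr: property-based tests.
    %% Run with proper:quickcheck(checkout_props:prop_cents_round_trip()).
    -module(checkout_props).
    -include_lib("proper/include/proper.hrl").

    prop_cents_round_trip() ->
        ?FORALL(Cents, non_neg_integer(),
                checkout:to_cents(checkout:from_cents(Cents)) =:= Cents).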



We are currently working on an integration testing system that does acceptance testing between the new and legacy systems. This has a number of interesting challenges related to spinning up the environment and managing the different systems, but it's still in relative infancy, so it's too early to say anything very insightful about what we've learned from doing it. Your friend Roberto Aloi from ESL actually started consulting with us in May and has been given a lot of responsibility for this test system. Really excited about this!



MIRKO: YOU RELEASE CODE ONCE EVERY WEEK (I HOPE I AM NOT WRONG IN MY INTERPRETATION OF THE EUC TALK DESCRIPTION). DO YOU THINK IT IS POSSIBLE TO REACH A PSEUDO "CONTINUOUS DEPLOYMENT" CYCLE? I AM ALWAYS SCARED OF CODE THAT GOES INTO PRODUCTION 5 DAYS AFTER I HAVE FINISHED WRITING IT. WHAT DO YOU THINK ABOUT ALL DEVELOPERS PUSHING TO PRODUCTION AFTER AN INTERNAL APPROVAL? DO YOU THINK IT IS POSSIBLE AT KLARNA?



DANIEL: Because Klarna sells a service, and not the software itself, a release means an immediate change in the code paths executed in customer interactions.



Klarna's legacy system releases every week. The new purchase taking system is currently in beta, where all XMLRPC-API traffic goes through the new systems. Functionality we can handle is taken on the new system, and that which we cannot gets reverse-proxied to the legacy system. This new system is not currently on any fixed release schedule and tends to release much more often than once a week. The restaurant guy in me thinks of finished but unshipped code like food dying under a heat lamp waiting to be served: a waste of money, a displeasing delay in delivering value and eventually a health risk.
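
Conceptually, the beta routing amounts to something like the following (a simplified sketch with made-up module names, not Klarna's actual code):

    %% Handle what the new system supports; reverse-proxy the rest.
    handle_request(Request) ->
        case new_system:supports(Request) of
            true  -> new_system:handle(Request);
            false -> legacy_proxy:forward(Request)
        end.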



The real bottleneck in how often a service can be upgraded is the amount of overhead required to prepare and deploy a new release. Some of this overhead is computational (running regression tests) but the most expensive part is human: the time required by test engineers, release managers, and operators to approve and deploy an upgrade. Because of the large amount of business functionality served by the legacy system and the dozens of developers shipping to it, the fact that it releases once a week is a testament to a good deal of problem solving and the talent of several thick-skinned managers.



The new system currently has much less overhead and shorter test times, so releasing even multiple times a day is possible. Since this is currently still very much owned by developers, as opposed to operators or administrators, we're relying on our own laziness to keep the human cost of upgrading code low, with aggressive automation of the mechanizable bits. The kid in me who grew up watching "The Terminator" and "The Matrix" thinks the Go/No-Go decision should still be made by a human, however.



In both systems, the expectation is that once a developer has written a change, the change passes the regression suites, and the diff has been signed off by a technical reviewer, it is ready to be merged in (and ship very soon). Emergency fixes are an unavoidable regular occurrence we are constantly trying to minimize, but this level of trust is an important part of our culture and necessary for the pace at which we want to deliver code changes. The literal definition of agility is the ability to change direction quickly. The most agile business is literally the one that is capable of changing the fastest.



MIRKO: I AM CURRENTLY WORKING AT A MOBILE PAYMENT COMPANY. I KNOW THAT THEY ARE TWO DIFFERENT DOMAINS, BUT WE ALSO HAVE TO DEAL WITH PURCHASES, SUBSCRIPTIONS AND A LARGE NUMBER OF PAYMENT METHODS. WHAT ARE THE MOST IMPORTANT ERLANG FEATURES FROM YOUR POINT OF VIEW, AND WHAT KIND OF ADVANTAGES ARE THEY GIVING KLARNA? WHAT WOULD THE MAIN DIFFERENCES BE IF, FOR EXAMPLE, THE WHOLE KLARNA SYSTEM WERE WRITTEN IN A "MORE MAINSTREAM" LANGUAGE SUCH AS JAVA OR PHP?



DANIEL: The only other languages where I've done significant amounts of coding are Standard ML and a charming dependently-typed logic programming language called Twelf, so I can't make super fair comparisons with Java or PHP. The transparency of data in Erlang is rather convenient for debugging, but it makes enforcing any sort of abstraction extremely difficult, and you too often end up tragically married to your original representation. On the flip-side, process isolation in Erlang gives you a really lovely, easy-to-understand memory model, whereas in a more imperative, shared-memory language like Java that goes out the window immediately.
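
To make that memory model concrete, here is a classic toy example of mine (not Klarna code): a counter whose state lives inside one process and can only be read or changed via messages, so no other process can corrupt it.

    %% Each counter is an isolated process: no shared memory, only messages.
    -module(counter).
    -export([start/0, increment/1, value/1]).

    start() ->
        spawn(fun() -> loop(0) end).

    increment(Pid) ->
        Pid ! increment,
        ok.

    %% Synchronous read: ask the process for its current value.
    value(Pid) ->
        Pid ! {value, self()},
        receive
            {counter_value, V} -> V
        after 1000 -> timeout
        end.

    loop(N) ->
        receive
            increment ->
                loop(N + 1);
            {value, From} ->
                From ! {counter_value, N},
                loop(N)
        end.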



Due to its fault tolerance and concurrency, Erlang is a great solution for implementing the web-facing, high-availability parts of a company's distributed system. If I had free rein and could start from scratch, I would use Erlang to terminate my web traffic and delegate the interesting business bits to something with a rich type system and a relatively lively user-base like OCaml or Haskell. I'm sure recruiting and upper management would be thrilled at the prospect of chasing after an even more esoteric talent pool, but there are some very successful financial companies using statically typed functional programming languages out there. I'm a huge fan of compile-time checks for correctness, so for the important logical bits I think the stronger the type system you have, the more reliable your output will be.



MIRKO: IN YOUR TALK DESCRIPTION ON THE ERLANG USER CONFERENCE SITE I READ: "CORE CODE GRUNT AT KLARNA". TRUST ME, IT IS THE FIRST TIME I HAVE SEEN THE WORD "GRUNT" APPLIED TO A DEVELOPER. WHY DO YOU FEEL LIKE A GRUNT?



DANIEL: I am pretty awesome and have a huge ego about it. Thinking of myself as a Code Grunt reminds me that although having good ideas is fantastic, the primary function of a software engineer is to implement solutions and ship them so business value is delivered to his company on a consistent basis. People with fancier titles typically have a lot less fun than me.



MIRKO: IN YOUR PREVIOUS LIFE YOU WERE A TYPE THEORY RESEARCHER. WHAT DO YOU THINK ABOUT THE ERLANG TYPE SYSTEM? SOMETIMES I READ DEBATES ABOUT IT. A VIEW FROM AN EX-RESEARCHER IS ALWAYS APPRECIATED.



DANIEL: Erlang values occupy a very reasonable set of types, not too different from core ML. The language itself does very little to leverage those types at compile time, so in practice it is rather dynamic, like a Lisp or Scheme. I think some kind of ML built on the BEAM with Erlang-style message passing could be rather exciting. My background makes me want to give such a thing a formally defined semantics, but then many implementation details of the BEAM make it rather complicated and my brain explodes.
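
As a tiny illustration of how little happens at compile time (my example, not Daniel's): the compiler accepts code that violates a type spec, and only an optional analysis tool such as Dialyzer will flag the mismatch.

    -module(spec_demo).
    -export([double/1]).

    %% The spec documents intent, but the compiler does not enforce it.
    -spec double(integer()) -> integer().
    double(N) -> N * 2.

    %% A call like spec_demo:double(foo) compiles without complaint and
    %% only fails at runtime with badarith; Dialyzer would catch it.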



I am a rather big fan of ML functors, and Erlang parameterized modules are rather similar in that both are functions from "things" to modules. We've used parameterized modules in a number of places to avoid duplicate code, write wrappers that preserve separations of concerns, and occasionally create some boilerplate that is shinier than your average boilerplate. The strong type systems in MLs keep you in line when using functors, but Erlang checks none of that with parameterized modules, so it is much like riding a motorcycle without a helmet: really fast, and more fun because you feel like a really bad, bad man, but when you screw up the mess is pretty unrecognizable. My team's usage of parameterized modules tends to raise eyebrows with more traditional Erlang developers, the feature's father Richard Carlsson occasionally among them. Please don't bring up R16.
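
For readers who never met the feature, a minimal sketch in the pre-R16 parameterized module syntax (names are hypothetical):

    %% Backend is bound when the module is instantiated, much like an ML
    %% functor argument, but nothing checks at compile time that Backend
    %% actually exports fetch/1 and store/2.
    -module(kv_store, [Backend]).
    -export([fetch/1, store/2]).

    fetch(Key)        -> Backend:fetch(Key).
    store(Key, Value) -> Backend:store(Key, Value).

Instantiation is Store = kv_store:new(SomeBackend), after which Store:fetch(Key) dispatches through the captured Backend; a missing backend function only surfaces at runtime.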



MIRKO: WHAT IS YOUR FAVOURITE ERLANG DEVELOPMENT SETUP? EDITOR, TOOLS AND SO ON? I AM CONDUCTING A LITTLE SURVEY ON THIS TOPIC.



DANIEL: I'm a rather unsavvy minimalist when it comes to my development setup. emacs in the console + erlang mode. I prefer to use vanilla defaults in my configuration as much as possible, so that I have minimal expectations if I am thrust into a fresh/unfamiliar environment.



Many people at Klarna who prefer a more IDE-like interaction with emacs + erlang use EDTS, developed by my teammate Thomas Järvstrand.