<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Marks Wandering Thoughts]]></title><description><![CDATA[Making the internet a beautiful place]]></description><link>https://markswanderingthoughts.nl/</link><image><url>https://markswanderingthoughts.nl/favicon.png</url><title>Marks Wandering Thoughts</title><link>https://markswanderingthoughts.nl/</link></image><generator>Ghost 3.32</generator><lastBuildDate>Sat, 18 Apr 2026 12:51:25 GMT</lastBuildDate><atom:link href="https://markswanderingthoughts.nl/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Go from Sketch to Prototype in One Hour with Lovable AI]]></title><description><![CDATA[<p>Today I put <a href="https://lovable.dev/invite/270804dc-ee8f-4dbe-9bca-414c7e20399e">Lovable</a> to the test to see just how fast I could go from a rough sketch to a functioning landing page: fully styled, responsive, and hosted on my own custom domain. 
Spoiler: it took less than an hour.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.37.24.png" class="kg-image" alt srcset="https://markswanderingthoughts.nl/content/images/size/w600/2025/07/Screenshot-2025-07-25-at-14.37.24.png 600w, https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.37.24.png 893w" sizes="(min-width: 720px) 720px"></figure><h2 id="-first-prompt"><strong>🖍️</strong> First Prompt</h2><p>I started with a simple wireframe sketch,</p>]]></description><link>https://markswanderingthoughts.nl/go-from-sketch-to-prototype-in-one-hour-with-lovable-ai/</link><guid isPermaLink="false">688378c3de301d07e503b6b2</guid><category><![CDATA[ai]]></category><category><![CDATA[llm]]></category><category><![CDATA[prototyping]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Fri, 25 Jul 2025 12:39:11 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.38.31.png" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.38.31.png" alt="Go from Sketch to Prototype in One Hour with Lovable AI"><p>Today I put <a href="https://lovable.dev/invite/270804dc-ee8f-4dbe-9bca-414c7e20399e">Lovable</a> to the test to see just how fast I could go from a rough sketch to a functioning landing page: fully styled, responsive, and hosted on my own custom domain. 
Spoiler: it took less than an hour.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.37.24.png" class="kg-image" alt="Go from Sketch to Prototype in One Hour with Lovable AI" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2025/07/Screenshot-2025-07-25-at-14.37.24.png 600w, https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.37.24.png 893w" sizes="(min-width: 720px) 720px"></figure><h2 id="-first-prompt"><strong>🖍️</strong> First Prompt</h2><p>I started with a simple wireframe sketch, a few reference images for the visual style, and some assets I wanted to include. My first prompt was pretty direct:</p><pre><code>Use this attached image 'PXL_20250725_080435921.MP.jpg' as a rough wireframe. Keep the style minimalistic. Have the language used on the page be in Dutch.

Have the landing page contain the following sections:
- Welkom bij Bijenherder Mark (introduction)
- Over mijn imkerij (Since when I have been a beekeeper, where I have my bees and my philosophy how I treat the bees)

Use the following images as assets on the page:
- PXL_20240509_125708558.MP.jpg - image of queen bee, can be used as image at the top of the page
- IMG_20200730_114701.jpg - bonus image of beecomb
</code></pre><p>I uploaded the files, hit "Generate," and within minutes Lovable delivered a first version. Not bad at all!</p><h2 id="-iterating">🔁 Iterating</h2><p>Once the initial version is ready, Lovable makes it easy to iterate. You can invoke additional prompts or make local edits by just clicking on a part of the design and typing something like:</p><blockquote>“Vertically align the text center, make the font bigger, and use Arial for the heading.”</blockquote><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.24.39.png" class="kg-image" alt="Go from Sketch to Prototype in One Hour with Lovable AI" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2025/07/Screenshot-2025-07-25-at-14.24.39.png 600w, https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.24.39.png 699w"></figure><p>Changes are rendered instantly, and you can either tweak further or roll back to a previous version. It’s fast, intuitive, and surprisingly fun.</p><h2 id="-publishing">🚀 Publishing</h2><p>Lovable’s built-in publishing flow lets you deploy your project directly to their infrastructure. If you have a bit of engineering experience, you can also:</p><ul><li>Connect the source to a GitHub repo - it auto-syncs the solution</li><li><code>git clone</code> the repo locally and run <code>npm run build</code> to generate a Vite-based distribution</li><li>Deploy it where you like: AWS S3 + Route53, Netlify, Vercel, you name it</li></ul><p>Super straightforward.</p><h2 id="-something-about-security">🛡 Something about security</h2><p>For me this was out of scope since I was creating a static page without user interaction. The number of attack vectors for such a solution is very limited. 
When you are creating a more expansive, complicated prototype, you can also run an audit with Lovable to identify security issues to address.</p><h2 id="-the-verdict">✅ The Verdict</h2><p>With just a sketch, a few images, and a clear idea, I went from concept to live prototype in under an hour. For non-engineers, Lovable even handles hosting and custom domains out of the box. The experience was smooth, efficient, and genuinely enjoyable.</p><p>Sure, it might not be the tool for highly complex or bespoke designs. But for quick idea-to-prototype workflows, it’s a fantastic addition to the toolkit.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.37.10.png" class="kg-image" alt="Go from Sketch to Prototype in One Hour with Lovable AI" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2025/07/Screenshot-2025-07-25-at-14.37.10.png 600w, https://markswanderingthoughts.nl/content/images/2025/07/Screenshot-2025-07-25-at-14.37.10.png 890w" sizes="(min-width: 720px) 720px"></figure><p>👉 Try it yourself on <a href="https://lovable.dev/invite/270804dc-ee8f-4dbe-9bca-414c7e20399e">Lovable.dev</a>!</p>]]></content:encoded></item><item><title><![CDATA[Tagging in Gitlab CI Pipeline using Deploy Keys]]></title><description><![CDATA[When building your application in a Gitlab pipeline you often want to push back changes to your repository. You can use deploy keys to accomplish this. 
Read on to learn how to use these.]]></description><link>https://markswanderingthoughts.nl/tagging-in-gitlab-ci-pipeline-using-deploy-keys/</link><guid isPermaLink="false">601c4f82ef1fc607d6cf8143</guid><category><![CDATA[gitlab]]></category><category><![CDATA[cicd]]></category><category><![CDATA[automation]]></category><category><![CDATA[ssh]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Continuous Integration]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Thu, 04 Feb 2021 19:57:36 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-20.55.15.png" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-20.55.15.png" alt="Tagging in Gitlab CI Pipeline using Deploy Keys"><p>Often when you are compiling your source code in Gitlab you will want to push some changes back to the repository, for instance a created <code>commit</code> or <code>tag</code>. By default Gitlab does not allow the Gitlab runner to write back to the repository. Adding that behaviour has been on the <a href="https://gitlab.com/gitlab-org/gitlab-foss/-/issues/18106">backlog</a> of <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/35067">Gitlab</a> for a <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/20416">long time</a>. But there exists a feature called <a href="https://docs.gitlab.com/ee/user/project/deploy_keys/">Deploy Keys</a> in Gitlab which we can leverage to fix this 🎉</p><p>This process involves some shuffling of sensitive files, so be mindful of where you store them. 
We are going to generate an SSH key pair, upload the private key into Gitlab as a CI/CD variable and register the public key as a Deploy Key, establishing the trust that allows the CI/CD pipeline to push code.</p><h2 id="one-time-setup-generate-ssh-key">One time setup: generate SSH key</h2><p>We assume that the following directory exists and is empty: <code>~/development/gitlab-deploy-token/</code> (MacOS).</p><p>To generate the public/private key pair we use <code>ssh-keygen</code> with the following command:</p><p><code>ssh-keygen -t rsa -b 4096 -C "your@email.address" -f ~/development/gitlab-deploy-token/id_rsa</code></p><p>This will give you an <code>id_rsa</code> file (private key!) and <code>id_rsa.pub</code>, which is the public key. The <code>your@email.address</code> is only a comment added to the public key; tweak as you desire.</p><h2 id="upload-private-ssh-key-to-gitlab">Upload private SSH key to Gitlab</h2><p>To have the private SSH key available in your build pipeline we need to add it as a CI/CD variable. You can add it to every project as you require it, but to centralise your secrets it's better to set them at the root level of your organisation/team/group/...</p><p>Because you want to keep your secrets secure it is also wise to think about the layering of permissions that you utilise within your organisation:</p><ul><li>Variables can only be updated or viewed by project members with <code>maintainer</code> permissions (source: <a href="https://docs.gitlab.com/ee/user/permissions.html#project-members-permissions">Gitlab</a>).</li></ul><p>To upload the SSH key as CI / CD variable:</p><ul><li>Navigate to <code>Settings → CI / CD → Variables</code></li><li>Add a key with the name <code>SSH_PRIVATE_KEY_TOOLKIT</code> and paste the contents of your private key as the value. 
(trick: <code>cat id_rsa | pbcopy</code> to copy the file contents directly to your clipboard on MacOS)</li></ul><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-19.52.10.png" class="kg-image" alt="Tagging in Gitlab CI Pipeline using Deploy Keys" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/02/Screenshot-2021-02-04-at-19.52.10.png 600w, https://markswanderingthoughts.nl/content/images/size/w1000/2021/02/Screenshot-2021-02-04-at-19.52.10.png 1000w, https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-19.52.10.png 1101w" sizes="(min-width: 720px) 720px"></figure><p>All projects underneath the place where you have declared the variable <code>SSH_PRIVATE_KEY_TOOLKIT</code> will now have access to this variable during the build pipeline:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-20.01.29.png" class="kg-image" alt="Tagging in Gitlab CI Pipeline using Deploy Keys"></figure><h2 id="create-deploy-token-in-your-project">Create a Deploy Key in your project</h2><p>To start trusting the uploaded SSH key we need to upload its public key as a 'Deploy Key' into your project.</p><ul><li>Navigate to <code>Settings → Repository → Deploy Keys</code> within your project</li><li>Add a new Deploy Key and add the public key as contents (<code>cat id_rsa.pub | pbcopy</code>)</li></ul><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-19.55.31.png" class="kg-image" alt="Tagging in Gitlab CI Pipeline using Deploy Keys" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/02/Screenshot-2021-02-04-at-19.55.31.png 600w, https://markswanderingthoughts.nl/content/images/2021/02/Screenshot-2021-02-04-at-19.55.31.png 852w" sizes="(min-width: 720px) 720px"></figure><ul><li>⚠️  Do 
not forget to tick the box <code>Write access allowed</code> or you will not be able to use your Deploy Key to push back to the repository.</li></ul><p>After you have added this Deploy Key you can now push to your repository from your build pipeline with the SSH key configured.</p><h2 id="use-deploy-token-in-your-ci-cd-pipeline">Use the deploy key in your CI/CD pipeline</h2><p>We are using <code>node:12</code> as our docker container, which comes pre-loaded with <code>git</code> installed. If your runner does not have git installed you will have to do this manually yourself in your <code>.gitlab-ci.yml</code>.</p><p>We are going to configure git to be able to push back to the origin with our SSH key as credentials in our build stage, as well as configure git to use the original committer's email address to improve traceability.</p><pre><code class="language-bash">image: node:12

"Tag version":
  stage: "Build"
    before_script:
      # put the SSH key in ~/.ssh and make it accessible
      - mkdir -p ~/.ssh
      - echo "$SSH_PRIVATE_KEY_TOOLKIT" &gt; ~/.ssh/id_rsa; chmod 0600 ~/.ssh/id_rsa
      - echo "StrictHostKeyChecking no" &gt; ~/.ssh/config
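      # safer alternative (assumes the Gitlab host is reachable at this point):
      # instead of disabling host key checking, pin the host key explicitly:
      # - ssh-keyscan "$CI_SERVER_HOST" &gt;&gt; ~/.ssh/known_hosts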
      # disable TLS verification (only relevant for self-hosted instances with self-signed certificates) and allow fast-forward pulls only
      - git config --global http.sslVerify false
      - git config --global pull.ff only
      # configure git to use email of commit-owner
      - git config --global user.email "$GITLAB_USER_EMAIL"
      - git config --global user.name "🤖 GitLab CI/CD"
      - echo "setting origin remote to 'git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git'"
      # first cleanup any existing named remotes called 'origin' before re-setting the url
      - git remote rm origin
      - git remote add origin git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
      # have the gitlab runner checkout be linked to the branch we are building
      - git checkout -B "$CI_BUILD_REF_NAME"
</code></pre><p>There are some magic variables in here: <code>CI_BUILD_REF_NAME</code> (branch name), <code>CI_PROJECT_PATH</code> (repository), <code>CI_SERVER_HOST</code> (<code>gitlab.com</code> unless self-hosted) and <code>GITLAB_USER_EMAIL</code> (email of the commit owner), which are all made available as environment variables by default by the Gitlab build pipeline. For a complete list see <a href="https://docs.gitlab.com/ee/ci/variables/">https://docs.gitlab.com/ee/ci/variables/</a></p><p>Now we are able to tag our commit and push that back to the <code>origin</code>:</p><pre><code class="language-bash">script:
    - VERSION=1.2.3
    - git tag v$VERSION -m "🤖 Tagged by Gitlab CI/CD Pipeline" -m "For further reference see $CI_PIPELINE_URL" -m "[skip ci]"
    - git push origin v$VERSION --no-verify
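    # (illustrative) instead of hard-coding the version it could be derived
    # from package.json, since the node:12 image ships with node:
    # - VERSION=$(node -p "require('./package.json').version")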
</code></pre><p>When doing 🧙‍♀️ magic ✨ I always like to add as much clarity and traceability as possible. This is why the tag message contains the robot emoji to indicate that it was automation and also why it contains a traceback to the gitlab pipeline ( <code>"For further reference see $CI_PIPELINE_URL"</code>) which leverages the <code>CI_PIPELINE_URL</code> environment variable to create a bi-directional relationship between the tag and the point of origin where it was created.</p><p>⚠️  We use <code>[skip ci]</code> in the tag's message to prevent a recursive build pipeline which tags itself to build and tag itself and.. well, that wouldn't be enjoyable...</p><p>You are now able to push commits/tags/.. to your repository from within your gitlab-ci pipeline! 🎉  The only thing which now remains is to add the deploy key to every repository where you want to leverage this same solution and adapt your <code>.gitlab-ci.yml</code> build pipeline.</p><h1 id="-downsides-to-this-approach">🧐 Downsides to this approach</h1><ul><li>You are uploading an SSH key which has write access to the repositories which have uploaded its accompanying public key as <code>Deploy Key</code> in Gitlab. This key is retrievable by everyone within the scope where you define it who has enough permissions (maintainer+) to manipulate the CICD settings of Gitlab.</li><li>It is possible to leak the private SSH key by mistake by echoing the contents in your <code>.gitlab-ci.yml</code> script. It will then end up in the logs of your build pipeline.</li><li>In theory you could generate an SSH key pair per project to scope down the impact when a key gets leaked, although that would add a lot of manual labor.</li></ul>]]></content:encoded></item><item><title><![CDATA[Git surgery to retain history]]></title><description><![CDATA[Using git surgery to graft together two repositories to retain their history? 
Let's go!]]></description><link>https://markswanderingthoughts.nl/git-surgery-to-retain-history/</link><guid isPermaLink="false">60098cd340170a0806f2bfec</guid><category><![CDATA[Git]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Thu, 21 Jan 2021 14:27:59 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2021/01/image.png" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2021/01/image.png" alt="Git surgery to retain history"><p>One of my co-workers created a Proof of Concept to transform an old Java codebase into a mavenized setup and split it up into smaller pieces. To do so he started by copy-pasting the original files into a new git repository and slowly committing changes to eventually get the maven setup working 🎉. The only thing is: by doing so we had lost all git history of our original files.. 🤔  Git surgery to the rescue! 🩺</p><p>➡️ This article assumes you are quite comfortable with day-to-day usage of git and does not explain the basics.</p><h2 id="our-situation-">Our situation 😓</h2><p>The original repository containing multiple applications without maven (simplified):</p><pre><code class="language-bash">README.md
JavaSource
Configuration/application-1
Configuration/application-2
Configuration/application-...n
</code></pre><p>target mavenized repository for <code>application-1</code> (simplified):</p><pre><code class="language-bash">README.md
pom.xml
mvn/
src/main/configuration/application-1 &lt;-- containing the application sourcefiles
</code></pre><p>Rough steps taken to get it working with maven:</p><ul><li>The contents of JavaSource were packaged and moved to a maven dependency</li><li>Application configuration <code>application-{1..n}</code> was split up into separate repositories, with the files copied in without history</li></ul><h2 id="the-plan-">The plan 🩹🥳</h2><p>Ideally the changes would have been based on a branch of the original codebase so they could be applied as a changeset. Since that was not the case we needed to bring in the serious git tools to rewrite history to our liking.</p><ul><li>We needed to <strong>remove</strong> the files inside <code>src/main/configuration/application-1</code> copy-pasted into the mavenized setup to prevent conflicts and confusion</li><li>We needed to extract only the history + changes of the <code>application-1</code> files in our source repository</li><li>Merge the mavenized changeset over the original files so they become complete again</li></ul><p>These steps will result in retaining the full file history of all <code>application-1</code> files. The downside is that the commits where the mavenizing setup was created are not atomic - the source files are absent. This was taken as an acceptable tradeoff.</p><h2 id="put-on-your-scrubs-">Put on your scrubs 👩‍⚕️</h2><p>First things first: make two local checkouts in a working directory so we do not break anything unrelated. Because git is distributed we can do all the following operations on a local copy on your machine and only push back to your version control system (Github, Gitlab, ..) when you are satisfied.</p><pre><code class="language-bash">mkdir git-surgery &amp;&amp; cd git-surgery
git clone git@github.com:crunchie84/blogpost-git-surgery-source.git source
cd source
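# a quick way to inspect the commit graph we are about to operate on:
git log --oneline --graph --all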
</code></pre><p>Let's check our git history:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot-2021-01-21-at-11.06.10.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot-2021-01-21-at-11.06.10.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot-2021-01-21-at-11.06.10.png 649w"></figure><h3 id="extract-our-application-from-the-source-repository">Extract our application from the source repository</h3><p>Okay, we need to clean this up so we only have the <code>application-1</code> part we need. To do so we can use a very handy command in git: <code>git subtree split</code>, which allows us to extract a folder out of our repository and place only those changes in a separate branch:</p><pre><code class="language-bash">git subtree split -P Configuration/application-1 -b rewritten-history-application-1
</code></pre><p>After executing this command the history now looks like the following:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot-2021-01-21-at-12.08.09.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot-2021-01-21-at-12.08.09.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot-2021-01-21-at-12.08.09.png 747w" sizes="(min-width: 720px) 720px"></figure><p>The newly created branch <code>rewritten-history-application-1</code> only contains commits (or the part of a commit) which involved the folder <code>Configuration/application-1</code>. It is noteworthy to observe that the new branch does not share a common ancestor with the original <code>master</code> branch because it is a total rewrite of history.</p><h3 id="prepare-our-new-repository">Prepare our new repository</h3><p>We are going to take the rewritten history of our <code>source</code> repository and use this as the <code>master</code> branch of our 'new' repository (which we are going to mavenize).</p><pre><code class="language-bash"># back to the root directory `/git-surgery`
cd ..
mkdir mavenized-solution
cd mavenized-solution
git init -b master
git pull ../source rewritten-history-application-1
</code></pre><p>We have now pulled the local, filesystem-based git repository <code>source</code> with the branch <code>rewritten-history-application-1</code> into our (empty) <code>master</code> branch, resulting in a clean history without any dangling commits:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.18.25.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot_2021-01-21_at_12.18.25.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.18.25.png 750w" sizes="(min-width: 720px) 720px"></figure><p>A side effect of <code>subtree split</code> is that all the files in the extracted folder are placed at the top level of your commits, which we are going to address in a bit:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.18.52.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot_2021-01-21_at_12.18.52.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.18.52.png 723w" sizes="(min-width: 720px) 720px"></figure><p>As a final preparation we are going to move the files as if they have always lived in <code>src/main/configuration/application-1</code> to make the history a bit more readable. To do so we can use <code>git filter-branch</code> to rewrite where the files have been all their lives:</p><pre><code class="language-bash"># we are in the /git-surgery/mavenized-solution folder
git filter-branch --force --prune-empty --tree-filter '
dir="src/main/configuration/application-1"
if [ ! -e "${dir}" ]
then
    mkdir -p "${dir}"
    git ls-tree --name-only $GIT_COMMIT | xargs -I files mv files "${dir}"
fi'
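# note: the community tool git-filter-repo (if installed) can do this same
# move in a single step:
#   git filter-repo --to-subdirectory-filter src/main/configuration/application-1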
</code></pre><p>⚠️  You will get a big warning about the side effects and possible gotchas with filter-branch but for our task it will suffice:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.22.31.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot_2021-01-21_at_12.22.31.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.22.31.png 702w"></figure><p>Now onwards to prepare our mavenized-poc to merge onto our prepared repository!</p><h3 id="cleaning-up-the-mavenized-poc-repository">Cleaning up the mavenized PoC repository</h3><p>We are going to clone the repository and remove the copy-pasted files of <code>application-1</code> from the commit history.</p><pre><code class="language-bash">cd .. # back to the git-surgery root folder
git clone git@github.com:crunchie84/blogpost-git-surgery-poc-target.git mavenized-poc
</code></pre><p>Original history:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.30.29.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot_2021-01-21_at_12.30.29.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.30.29.png 637w"></figure><p>We are going to use the community tool <code>git-filter-repo</code>, which you can install using <code>brew install git-filter-repo</code> (on MacOS), to simplify what we want to do:</p><pre><code class="language-bash">cd mavenized-poc
git filter-repo --path src/main/configuration/application-1 --invert-paths
</code></pre><p>We are filtering out all (parts of) commits which have anything to do with the folder containing the copied application-1 source files. The result is a clean history of only the steps to get the mavenized setup working:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.34.28.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot_2021-01-21_at_12.34.28.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.34.28.png 635w"></figure><p>The only thing left to do is apply the maven setup to our extracted <code>application-1</code>.</p><h3 id="merging-our-cleaned-up-mavenized-setup-onto-our-application-repository">Merging our cleaned up mavenized setup onto our application repository</h3><p>We are going to re-use the <code>git pull</code> trick we have used before to get commits from a different repository into ours. But with a twist:</p><pre><code class="language-bash"># working dir = /git-surgery/mavenized-poc
git checkout -b mavenizing-application
cd ../mavenized-solution
git pull ../mavenized-poc mavenizing-application --allow-unrelated-histories
</code></pre><p>First we make a branch in our clean mavenized setup repository because the name will end up in our commit history and this is an important piece of information. The true magic resides in <code>--allow-unrelated-histories</code>. Given that git pull is shorthand for <code>git fetch &amp;&amp; git merge</code>, this option allows us to merge unrelated histories. Normally git will always look for a common ancestor when merging, but that does not mean it cannot merge without one!</p><p>After invoking the <code>git pull</code> you should be presented with a git merge commit message dialog in which you can add as much extra information as you deem relevant.</p><p>Now we can observe in our git commit history what happened:</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.38.57.png" class="kg-image" alt="Git surgery to retain history" srcset="https://markswanderingthoughts.nl/content/images/size/w600/2021/01/Screenshot_2021-01-21_at_12.38.57.png 600w, https://markswanderingthoughts.nl/content/images/2021/01/Screenshot_2021-01-21_at_12.38.57.png 703w"></figure><p>Two unrelated histories have been merged together, retaining the commit history of both. All that is left is to test it out locally. When satisfied you can now (force) push it to your origin as the new repository 🎉</p><p>Surgery success, patient dismissed! 🚀</p><h2 id="parting-thoughts">Parting thoughts</h2><ul><li>It would have been easier if the proof of concept had been directly created as a branch on the original repository. By the time we got to this point that was already water under the bridge.</li><li>Taking the original application repository, creating a branch and then copy-pasting the mavenized proof-of-concept codebase over it was also an option but then we would lose the history of those changes ⚖️... 
We opted for a solution which tried to retain both histories as well as possible</li></ul><h2 id="references">References</h2><ul><li><code>filter-repo</code> community package - <a href="https://github.com/newren/git-filter-repo">https://github.com/newren/git-filter-repo</a></li><li>merging unrelated git histories - <a href="https://developpaper.com/git-pull-merges-two-branches-without-common-ancestor/">https://developpaper.com/git-pull-merges-two-branches-without-common-ancestor/</a></li><li><code>git subtree</code> explained on stack overflow - <a href="https://stackoverflow.com/questions/359424/detach-move-subdirectory-into-separate-git-repository/17864475#17864475">https://stackoverflow.com/questions/359424/detach-move-subdirectory-into-separate-git-repository/17864475#17864475</a></li></ul><p>Git repositories used as examples in this blogpost</p><ul><li><a href="https://github.com/crunchie84/blogpost-git-surgery-poc-target">https://github.com/crunchie84/blogpost-git-surgery-poc-target</a></li><li><a href="https://github.com/crunchie84/blogpost-git-surgery-source">https://github.com/crunchie84/blogpost-git-surgery-source</a></li></ul><p>The complete script explained in this article:</p><ul><li><a href="https://gist.github.com/crunchie84/a070f66a3af57d918e4c345d0ec5ea9b">https://gist.github.com/crunchie84/a070f66a3af57d918e4c345d0ec5ea9b</a></li></ul>]]></content:encoded></item><item><title><![CDATA[My thoughts on AWS CodeArtifact]]></title><description><![CDATA[<p>I have taken some time to play around with AWS CodeArtifact (<a href="https://aws.amazon.com/codeartifact/" rel="noopener noreferrer">https://aws.amazon.com/codeartifact/</a>) as an intermediate/private npm package registry. It ties into the AWS ecosystem very well and provides a transparent proxy for packages of npmjs.org to your application. 
You can keep using your regular tooling</p>]]></description><link>https://markswanderingthoughts.nl/my-thoughts-on-aws-codeartifacts/</link><guid isPermaLink="false">5efdd21b047cca07bec5f00e</guid><category><![CDATA[aws]]></category><category><![CDATA[npm]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Thu, 02 Jul 2020 12:45:54 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2020/07/ImgHead_AWS-Code-Artifact.642506c27a2be27ae56242072b4842b1a78666b1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2020/07/ImgHead_AWS-Code-Artifact.642506c27a2be27ae56242072b4842b1a78666b1.jpg" alt="My thoughts on AWS CodeArtifact"><p>I have taken some time to play around with AWS CodeArtifact (<a href="https://aws.amazon.com/codeartifact/" rel="noopener noreferrer">https://aws.amazon.com/codeartifact/</a>) as an intermediate/private npm package registry. It ties into the AWS ecosystem very well and provides a transparent proxy for packages of npmjs.org to your application. 
You can keep using your regular tooling (<code>npm</code>) with only an additional aws-cli based login in front of it.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2020/07/aws-codeartifact.png" class="kg-image" alt="My thoughts on AWS CodeArtifact"></figure><h2 id="one-time-setup">One time setup</h2><ol><li>Create a domain within your AWS account (often: your-company-name, or if scoped your-team-name)</li><li>Create a repository for your app, optionally linking to an upstream package source</li><li>Start using it!</li></ol><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2020/07/Screenshot-2020-07-02-at-14.29.35.png" class="kg-image" alt="My thoughts on AWS CodeArtifact"></figure><h2 id="usage">Usage</h2><p>To use your new AWS CodeArtifact package repository you can easily configure its endpoint in your <code>.npmrc</code> file globally or in your project. You can publish private packages to it yourself (unless configured otherwise in AWS)</p><!--kg-card-begin: markdown--><pre><code>export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain my-org-name --domain-owner 12345678 --query authorizationToken --output text`

# put the following lines in your .npmrc file
registry=https://my-app-name-12345678.d.codeartifact.eu-central-1.amazonaws.com/npm/npm-store/
//my-app-name-12345678.d.codeartifact.eu-central-1.amazonaws.com/npm/npm-store/:always-auth=true
//my-app-name-12345678.d.codeartifact.eu-central-1.amazonaws.com/npm/npm-store/:_authToken=${CODEARTIFACT_AUTH_TOKEN}
</code></pre>
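As an alternative to exporting the token and editing <code>.npmrc</code> by hand, the aws-cli can write this configuration for you. A minimal sketch, assuming the same placeholder domain (<code>my-org-name</code>), account id (<code>12345678</code>) and repository name (<code>npm-store</code>) as above:

```shell
# Let the aws-cli fetch a fresh authorization token and point npm at the
# CodeArtifact repository in one step (tokens expire, so rerun periodically).
# Domain, owner and repository below are placeholders for your own setup.
aws codeartifact login --tool npm \
  --domain my-org-name \
  --domain-owner 12345678 \
  --repository npm-store

# From here on, regular npm commands transparently go through CodeArtifact
npm install
```

This writes the registry URL and auth token into your npm configuration, so it is handy as a pre-build step on CI as well.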
<!--kg-card-end: markdown--><h3 id="why-do-you-want-to-use-it">Why do you want to use it?</h3><ul><li>🔑 Private packages (internal code sharing)</li><li>📝 Auditability within AWS of the packages used for application code</li><li>💵 Pricing is pay-as-you-go (data stored, data in, data out), making it a very scalable solution compared to per-user pricing like npmjs, which starts at 35 USD/user/month</li></ul><h2 id="summary">Summary</h2><ul><li>👮‍♂️ You can create multiple repositories, one for each team (or product); this leads to more visibility (within AWS) of which packages (and versions) are used where (auditability: security)</li><li>🥳 You can publish private packages to your own repository within AWS CodeArtifact</li><li>🤯 You can even publish <strong>a private ‘hotfix’ version of a public npm package</strong> without having to do manual hacks in your application codebase or wait for the maintainers to merge your code</li><li>🧐 I got a 404 during <code>npm install</code> of a package, most likely because the transparent request to npmjs took longer than expected. On retry everything was fine. Not sure if this was a one-off or a common theme</li><li>👍 Upstream package sources &amp; tooling supported: maven-central, google-android, gradle-plugin, pypi and npm.</li></ul>]]></content:encoded></item><item><title><![CDATA[Thoughts on using a Chromebook as primary machine]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I have always been intrigued by the Chromebook solution. Zero maintenance of the operating system and very decent hardware at an affordable price point. 
But my concern always was &quot;will this also work as a primary personal laptop?&quot; Every time I reviewed the Chromebook option the answer was <em>Not yet</em></p>]]></description><link>https://markswanderingthoughts.nl/using-a-chromebook-as-main-laptop/</link><guid isPermaLink="false">5cb4bc3a986d9408fabcbf88</guid><category><![CDATA[Hardware]]></category><category><![CDATA[Product Review]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Fri, 05 Jul 2019 21:28:49 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/07/types-of-flowers-1520214627.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://markswanderingthoughts.nl/content/images/2019/07/types-of-flowers-1520214627.jpg" alt="Thoughts on using a Chromebook as primary machine"><p>I have always been intrigued by the Chromebook solution. Zero maintenance of the operating system and very decent hardware at an affordable price point. But my concern always was &quot;will this also work as a primary personal laptop?&quot; Every time I reviewed the Chromebook option the answer was <em>Not yet</em>, because the only way to gain access to a terminal was to configure the device in Developer Mode, effectively <em>rooting</em> the device. But lately there have been two major improvements in ChromeOS:</p>
<ul>
<li>The feature to install and use Android Apps;</li>
<li>The feature to enable first-class terminal access without having to jailbreak the device.</li>
</ul>
<p>So I decided that the proof of the pudding is in the eating and bought a Chromebook.</p>
<h2 id="whichdevicetobuy">Which device to buy</h2>
<p>There are three big concerns in choosing the right Chromebook:</p>
<ul>
<li>Storage - Most devices only have 32/64 GB of internal SSD storage</li>
<li>CPU - Basic Chromebook usage does not require the hefty CPU that development does</li>
<li><a href="https://www.reddit.com/r/Crostini/wiki/getstarted/crostini-enabled-devices">Compatibility with Crostini</a> (terminal) since not all chipsets support this</li>
</ul>
<p>If you have money to spend the Google Pixelbook is the way to go. It has enough storage, is the primary target for Crostini and has a fast i7 processor. The current price is ~$1000.</p>
<p>Since I was a bit more on a budget I chose the Lenovo Yoga C630 (touchscreen, 64 GB SSD &amp; decent i5 processor).</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/07/kdQGHaZJydRxfr6S6CmxD.jpg" class="kg-image" alt="Thoughts on using a Chromebook as primary machine"></figure><!--kg-card-begin: markdown--><h2 id="sohowdidifare">So how did I fare?</h2>
<p>The trivial stuff just works(tm). You can use GSuite for your documents and Google Drive for storing files. I was able to mount my Dropbox as another datasource in ChromeOS which made cross-device access much easier. For a lot of applications you can use either their Chrome extension, website or Android application.</p>
<p>Let us dig into some more advanced use cases.</p>
<h3 id="webdevelopment">Web Development</h3>
<p>The terminal is bliss. It works as expected for doing software development using Visual Studio Code. A delight for me was that software you install in the terminal automatically gets a shortcut in ChromeOS.</p>
<p>Commit signing using GPG also works and is as easy as:</p>
<ul>
<li><code>gpg --gen-key</code></li>
<li><code>gpg --armor --export</code></li>
<li>Upload the GPG public key to GitHub.</li>
<li>Configure git to utilise the signing key: <code>git config --global commit.gpgsign true</code></li>
</ul>
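The list above can be sketched as one shell session; note that the key id <code>3AA5C34371567BD2</code> used below is a made-up example, so substitute the id that <code>gpg</code> prints for your own key:

```shell
# Generate a new key pair (interactive: answer the prompts)
gpg --gen-key

# Look up the long id of the freshly generated key
# (the id 3AA5C34371567BD2 below is a hypothetical example)
gpg --list-secret-keys --keyid-format=long

# Export the public key in ASCII armor and upload it to GitHub
gpg --armor --export 3AA5C34371567BD2

# Point git at the key and sign every commit by default
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true
```

After this, a plain <code>git commit</code> produces a signed commit that GitHub shows as "Verified".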
<p>Even though the terminal runs in a separate sandbox you are able to proxy-pass <code>localhost</code> from Chrome to this sandbox environment to reach your web application.</p>
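A quick way to see this forwarding in action is to start any dev server inside the Crostini container and fetch the same port from the other side; port 8080 below is an arbitrary example:

```shell
# Serve the current directory from inside the Crostini container
python3 -m http.server 8080 &
pid=$!
sleep 1

# ChromeOS forwards localhost into the container, so the very same URL
# works from Chrome; here we just check that it answers with HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
# prints: 200

kill "$pid"
```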
<h3 id="downloadingtorrents">Downloading torrents</h3>
<p>You can use JSTorrent as an extension in Chrome to download torrent files. It even works with magnet links. The downside is the download speed, which was underwhelming.</p>
<p>An alternative route which I have tried is to install Deluge (or any other popular torrent client) in the terminal, but I found that the terminal currently has no access to USB-connected storage. So you are stuck downloading to the <code>Downloads</code> folder of your Chromebook, which cannot hold much data. USB support is being worked on, so expect this to be resolved in the upcoming months.</p>
<h3 id="cryptocurrency">Cryptocurrency</h3>
<p>Unfortunately I have not found any ChromeOS-compatible wallet. I tried to use my physical wallet in the form of a USB device called the <a href="https://www.ledger.com/">Nano S</a>. The Android app for this device, called <a href="https://play.google.com/store/apps/details?id=com.ledger.live&amp;hl=en">Ledger Live</a>, is not able to detect the key once plugged in; most likely this is an issue with USB pass-through to Android applications from ChromeOS.</p>
<p>Luckily I can trade using my Android phone with OTG support, so for now there is no urgent need to have this fixed.</p>
<h3 id="gaming">Gaming</h3>
<p>You can install Steam using the Linux support via the terminal. But at the moment hardware acceleration is not supported, so playing games is not really feasible for now. This might change in the future of course.</p>
<h3 id="flashingsdcards">Flashing SD Cards</h3>
<p>You can use the <a href="https://chrome.google.com/webstore/detail/chromebook-recovery-utili/jndclpdbaamdhonoechobihbbiimdgai/related">Chrome OS Recovery Utility</a> and, instead of flashing a backup of ChromeOS to an SD card, flash any arbitrary image. This allows you, for instance, to flash a Raspberry Pi image for your own needs.</p>
<h3 id="3dprintingiotdevelopment">3D Printing, IoT Development</h3>
<p>Unfortunately, since USB support in the terminal is currently not working (ChromeOS 74), a lot of applications are not usable yet. On the bright side: ChromeOS 75 should have this fixed (<a href="https://www.androidheadlines.com/2019/04/usb-support-chrome-os-75-versatile.html">https://www.androidheadlines.com/2019/04/usb-support-chrome-os-75-versatile.html</a>)</p>
<h3 id="ebookmanagement">E-Book management</h3>
<p>Previously I relied on Calibre to manage my e-book collection and the books on my reader. I have not yet found a way to do so on my Chromebook.</p>
<h2 id="inconclusion">In conclusion</h2>
<p>Using a Chromebook as my primary laptop has not been an easy decision, but it has paid off tremendously. For now I enjoy what works, and I have not come across many high-frequency use cases that I am unable to cover. With the imminent release of USB support via Linux/the terminal, a lot of goodness will arrive to make the Chromebook offering much more versatile.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Recap of Lead Developer conference London]]></title><description><![CDATA[<p>This past week I finally attended the <a href="https://london2019.theleaddeveloper.com/">Lead Developer Conference in London</a> and what a conference it has been. Everything was provided: from quality talks and good, healthy nourishment to even childcare for those requiring it.</p><p>There were a lot of sessions about the soft skills involved with being a</p>]]></description><link>https://markswanderingthoughts.nl/recap-lead-developer-conference-london-2019/</link><guid isPermaLink="false">5d06b1ac986d9408fabcbf95</guid><category><![CDATA[Conferences]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Thu, 20 Jun 2019 19:44:19 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/06/leaddeveloper-header.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2019/06/leaddeveloper-header.jpg" alt="Recap of Lead Developer conference London"><p>This past week I finally attended the <a href="https://london2019.theleaddeveloper.com/">Lead Developer Conference in London</a> and what a conference it has been. Everything was provided: from quality talks and good, healthy nourishment to even childcare for those requiring it.</p><p>There were a lot of sessions about the soft skills involved with being a team lead, which gave a lot of insight into what motivates people, makes people excel at their job and overall feel welcome in their team. </p><p>Some of my personal takeaways and the talks which most resonated with me were the following.</p><h2 id="lara-hogan-navigating-team-friction">Lara Hogan - Navigating Team Friction</h2><p>Learn to decipher the needs of others and what motivates them. There are a lot of ways to do so, for instance using active listening, asking open-ended questions etc. 
Use the <a href="https://www.palomamedina.com/biceps/">six axes of core needs</a> (Belonging, Improvement, Choice, Equality, Predictability, Significance).</p><p>Do not demand change of your reports; explain to them what needs attention and use facts. Use something like the <a href="https://docs.google.com/document/d/1KTH4owMH8BA3NWX7i7fTmdt7cBdiLv0PBrnO7vLl4ik/edit">Feedback Equation</a> to create actionable feedback. People are capable of developing their own ideas on how to address their shortcomings, so ask them to voice these: "How would you like to approach this?"</p><!--kg-card-begin: html--><script async class="speakerdeck-embed" data-id="ba282d433697447b87861752dee44063" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script><!--kg-card-end: html--><h2 id="melinda-seckington-level-up-developing-developers">Melinda Seckington - Level Up: Developing Developers</h2><p>The analogy between how you learn to play a game and how you grow as a developer was very striking, covering topics like onboarding, starting small, practicing skills, repeating your skills and having a mentor. </p><!--kg-card-begin: html--><script async class="speakerdeck-embed" data-id="d75564ab509f4e47a99102649bfec590" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script><!--kg-card-end: html--><h2 id="pat-kua-flavours-of-technical-leadership">Pat Kua - Flavours of technical leadership</h2><p>Pat argued that anyone can be a leader. All it takes is a single action. </p><blockquote>“To demonstrate technical leadership, you don't have to be an expert in a particular technical area. 
You need to understand enough to be able to facilitate the right conversations.”</blockquote><p>There are multiple ways to lead:</p><ul><li>The Knowledge Cultivator (creates a learning environment)</li><li>The Advocate (listens to pain points, finds buy-in for their vision, creates the community)</li><li>The Connector (knows who you need to solve your issues)</li><li>The Storyteller</li></ul><p>There are also a lot of ways to share knowledge and make your leadership visible: speak at conferences, blog, give interviews, run book reading clubs, write newsletters, host meetups etc.</p><p>The last interesting thing he discussed was career growth and the three paths you can grow along:</p>
In this article we will shine a light on a recent outage we experienced while doing so and what we have learned from this.]]></description><link>https://markswanderingthoughts.nl/upgrading-kubernetes-nodes-without-disruptions/</link><guid isPermaLink="false">5c53716be15c460755f5226f</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Gke]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Thu, 07 Feb 2019 08:45:35 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/02/beach-sea-coast-water-nature-ocean-1205552-pxhere.com.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2019/02/beach-sea-coast-water-nature-ocean-1205552-pxhere.com.jpg" alt="Upgrading Kubernetes Nodes without disruptions"><p>The Philips Hue cloud infrastructure is running on Google Container Engine (Kubernetes) since 2014. We have done many Kubernetes upgrades without incidents but I want to share with you the learnings we have attained in our most recent upgrade which failed spectacularly:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Currently we are having an issue with remote connectivity (Out of Home, voice commands), and we’re working hard to resolve it ASAP. Your local connection via Wi-Fi is not affected by this. We’ll keep you posted!</p>&mdash; Philips Hue (@tweethue) <a href="https://twitter.com/tweethue/status/1080867645858164736?ref_src=twsrc%5Etfw">January 3, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>It resulted in a production outage which took multiple hours to resolve. But to understand what went wrong and what we have learned from it we first need to discuss a bit how we normally handle Kubernetes upgrades.</p><h2 id="new-kubernetes-release-now-what">New Kubernetes release, now what?</h2><p>Every 12 weeks a new version of Kubernetes is released to the stable channel. It takes some time for Google to produce a GKE-compatible version which is mature enough to be used in production. </p><p>Our regular approach is to wait a bit longer after the new GKE release becomes available to let the dust settle. If no bugs are raised by other Google customers we will start our upgrade by <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrade_master">manually upgrading the master node</a>, and after this is completed we will follow suit with the upgrade of our worker nodes. This process is tested on our development clusters before being executed on the Philips Hue production environment.</p><h2 id="on-a-beautiful-morning-in-january-2019">On a beautiful morning in January 2019</h2><blockquote>I'm going to upgrade the production environment's nodepool today. Minor impact (elevated error rate) is to be expected due to application pods getting cycled excessively.</blockquote><p>Famous last words. We started the upgrade via the GKE UI and let Google perform it in an automated fashion. It would remove one VM from the nodepool, upgrade it to Kubernetes 1.10 and return the new VM into the nodepool until all machines had been rotated. Due to the size of our cluster (~280 machines) this process takes hours. </p><p>After a few hours we observed message delivery failures to one of our partners (Amazon Alexa) which only seemed to grow. 
In hindsight we now know that this was caused by <em>kube-dns</em> suddenly starting to receive DNS requests from within our cluster while running only three pods; we were effectively DDoS'ing ourselves with DNS requests.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/02/Screenshot-2019-02-06-at-10.22.00-PM.png" class="kg-image" alt="Upgrading Kubernetes Nodes without disruptions"><figcaption>uh-oh, that can't be good?</figcaption></figure><p>We aborted the roll-out of the new nodes and after some discussion decided to undo our partial roll-out by creating a second nodepool with 150 nodes of the original Kubernetes version and then move the workload back to these nodes, scaling further up when needed. </p><p>The wheels were set in motion. We created the nodepool with nodes of our previous Kubernetes version and waited for all nodes to become healthy. But once they came online our master became unresponsive! Even though the master should only be used for the management control plane, our metrics started to indicate that things had just become dramatically worse: we faced a major outage. #panic</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/02/Screenshot-2019-02-05-at-7.47.06-AM.png" class="kg-image" alt="Upgrading Kubernetes Nodes without disruptions"><figcaption>There goes the error budget for 2019</figcaption></figure><p>It took us multiple hours, together with help from Google Support, to get our GKE master node back online, deduce what caused the original error rate to go up and make our service stable again. 
Unlucky achievement unlocked: Google support engineers on our case from every geographical timezone, giving us round-the-clock support.</p><h2 id="post-mortem">Post Mortem</h2><p>After the dust had settled we came to the conclusion that our cluster was pretty old (created somewhere in 2016), causing our DNS configuration to be incorrect. Due to the upgrade of the nodes this configuration was somehow corrected, which caused DNS lookups to now hit <em>kube-dns</em> first, which was running with only three pods and became overloaded. Scaling kube-dns from three to 110 pods resolved a lot of that issue, together with properly <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers">configuring stubDomains and upstreamNameservers</a>.</p><p>To roll back our upgrade we created a second nodepool which (in hindsight) was of such size that our master node had to <a href="https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components">be automatically upgraded</a> by Google to accommodate that number of nodes. This made the master unresponsive for some time, and when it finally came back it received a flood of events to process and propagate to all nodes, keeping it unresponsive even longer.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/02/Tsunami_by_hokusai_19th_century.jpg" class="kg-image" alt="Upgrading Kubernetes Nodes without disruptions"></figure><p>What did not help in this flood of events is that one of our services contained a lot of backends (&gt; 500 pods), causing the size and propagation of iptables rules to become cumbersome. 
This is one of the many dimensions in which Kubernetes scalability can become an issue as discussed in <a href="https://schd.ws/hosted_files/kccna18/92/Kubernetes%20Scalability_%20A%20multi-dimensional%20analysis.pdf">"Kubernetes Scalability: a multi-dimensional analysis"</a> (Presented at Kubecon 2018):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/02/ink.png" class="kg-image" alt="Upgrading Kubernetes Nodes without disruptions"><figcaption>Our number of backends/service was way outside of the guaranteed scalability matrix</figcaption></figure><p>In the end we were able to resolve our outage and have our cluster become stable again. In doing so we have learned a lot about scalability and performance characteristics of our cluster. In the weeks since this outage we have been able to successfully upgrade our nodes to Kubernetes 1.10 but this again surprised us in many ways. Stay tuned for part 2 in this series!</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Update: Good news – the recently reported remote connectivity issue should be resolved. We&#39;ll monitor the service closely to ensure the fix is permanent. Thank you for your patience!</p>&mdash; Philips Hue (@tweethue) <a href="https://twitter.com/tweethue/status/1080951966753345543?ref_src=twsrc%5Etfw">January 3, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure>]]></content:encoded></item><item><title><![CDATA[Battle of the blogs: Where to go next]]></title><description><![CDATA[It took me quite some time to decide which platform to start using for my blog. In this article I discuss the options I weighed and why I chose Ghost as my blogging platform.]]></description><link>https://markswanderingthoughts.nl/battle-of-the-blogs-medium-vs-tumblr-vs-ghost/</link><guid isPermaLink="false">5c3fa8eb7b83a64fb16277bb</guid><category><![CDATA[Public Speaking]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Thu, 31 Jan 2019 22:05:15 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/01/PIXNIO-201928-4924x3283.jpeg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://markswanderingthoughts.nl/content/images/2019/01/PIXNIO-201928-4924x3283.jpeg" alt="Battle of the blogs: Where to go next"><p>I had this coming for a long time. My blog was originally hosted on <a href="https://tumblr.com">Tumblr.com</a>. When I wrote articles I utilised the public folder of Dropbox to upload and host my images. This worked beautifully until <a href="https://www.dropbox.com/help/files-folders/public-folder">they disabled this feature in September 2017</a>, effectively breaking all my images. Combined with the sub-optimal editor of Tumblr this suggested it was time to move. Time to search for an alternative. Enter the battle of the blogs.</p>
<p>If you Google <a href="https://www.google.com/search?q=best%20blogging%20platform%202019">&quot;Best blogging platform 2019&quot;</a> the shortlist will include <a href="https://tumblr.com">Tumblr</a>, <a href="https://wordpress.com">WordPress</a>, <a href="https://medium.com">Medium</a> and much lower on the list <a href="https://ghost.org">Ghost</a>.</p>
<h2 id="requirements">Requirements</h2>
<ul>
<li>Good writing experience</li>
<li>Free or low cost</li>
<li>Ability to write in Markdown</li>
<li>Copyright of my thoughts retained</li>
<li>Bonus: ability to import my current articles from tumblr.com</li>
</ul>
<h2 id="wordpress">WordPress</h2>
<p>Almost everybody knows WordPress. You can download and host the software yourself, which also makes you responsible for upgrades &amp; backups to prevent data loss. If you do not want to do this you can take a subscription for as little as $5 a month and have wordpress.com do this, so you can focus on writing high-quality content.</p>
<p>Due to the large install base it is also a big target for <a href="https://www.cvedetails.com/product/4096/Wordpress-Wordpress.html?vendor_id=2337">CVE exploits</a>, so I decided to take the PaaS offering for a spin. Luckily, getting some content into it was trivial because they support <a href="https://en.support.wordpress.com/import/import-from-tumblr/">importing your blog from Tumblr</a>.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/01/Screenshot-2019-01-30-at-09.15.13.png" class="kg-image" alt="Battle of the blogs: Where to go next"></figure><!--kg-card-begin: markdown--><p>But the downside is the editor, which was still a pretty basic HTML WYSIWYG editor. <em>UPDATE</em>: After writing this blog I found out that they now have a block-based editor like Medium and Ghost called <a href="https://wordpress.org/gutenberg/">Gutenberg</a>, which was not enabled by default in their hosted offering.</p>
<h2 id="medium">Medium</h2>
<p>Medium is a big platform with a large userbase, cross-referring blog articles to keep users engaged and give your articles more exposure. The downside of this is that the brand is Medium, so they first made it <a href="https://medium.com/benjamin-dada/medium-now-charges-75-to-set-up-a-custom-domain-54ab54117fd4">difficult</a> and finally <a href="https://help.medium.com/hc/en-us/articles/115003053487-Custom-Domains-service-deprecation">impossible</a> to have your own custom URL. The UI is also very much fixed and you as a writer have very little influence over it. But the writing experience is oh so sweet. It is the siren's lure of blog writing.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/01/Screenshot-2019-01-31-at-22.18.19.png" class="kg-image" alt="Battle of the blogs: Where to go next"><figcaption>Sweet Editing</figcaption></figure><p>In the end I chose not to go with Medium. Others have already written a great deal about why Medium might in the end not be the best place to put your content. For example:</p><ul><li><a href="https://sendcheckit.com/blog/why-you-should-put-your-content-on-medium-and-your-own-domain">https://sendcheckit.com/blog/why-you-should-put-your-content-on-medium-and-your-own-domain</a></li><li><a href="https://m.signalvnoise.com/signal-v-noise-exits-medium/">https://m.signalvnoise.com/signal-v-noise-exits-medium/</a></li></ul><p>But the final straw which resonated deeply with me was <a href="https://www.hanselman.com/blog/YourWordsAreWasted.aspx">"<em>OWN YOUR WORDS</em>"</a>. If you let somebody have control over where you write, they effectively have control over what you write.</p><h2 id="ghost">Ghost</h2><p>Finally I reviewed <a href="https://ghost.org">Ghost</a> and found out that they not only have paid tiers but the software is actually <a href="https://docs.ghost.org/install/ubuntu/">open source and you can host it yourself</a>. 
It supports the same nice editing experience as Medium does and even has a <a href="https://ghost.org/downloads/">native app</a> to write articles on your Mac or on your mobile device on the go.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/01/Screenshot-2019-01-31-at-23.02.13.png" class="kg-image" alt="Battle of the blogs: Where to go next"><figcaption>Mac App to edit articles</figcaption></figure><p><a href="https://www.ghostforbeginners.com/how-to-import-blog-posts-from-tumblr-to-ghost/">Importing my existing content into Ghost</a> was not trivial because in Tumblr the articles are stored as simple objects while Ghost uses a rich block-based approach. But having the titles and metadata like creation date &amp; tags is enough to polish my old content back into shape in the upcoming weeks.</p><p>After researching a lot about Ghost and also discovering that I would be able to <a href="https://medium.com/@stevenc81/self-hosted-ghost-blog-on-gcp-free-tier-d91cd38b3bc4">host it for free using a Google Cloud Platform</a> virtual machine I took the leap. And you are reading the results. Welcome to my new blog, hosted by Ghost.</p>]]></content:encoded></item><item><title><![CDATA[W00tcamp 2018 - Augmenting the sliding experience]]></title><description><![CDATA[Interested in how we augmented the slide at my office with LED & sound effects? 
Read more about the adventure in this blog.]]></description><link>https://markswanderingthoughts.nl/augmenting-my-offices-slide-with-lights-sound/</link><guid isPermaLink="false">5c376e0a1ffa7f0d11a38a6f</guid><category><![CDATA[Internet of Things]]></category><category><![CDATA[Hackathon]]></category><category><![CDATA[3d printing]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Sun, 30 Dec 2018 21:24:00 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/02/IMG_20181109_120929.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2019/02/IMG_20181109_120929.jpg" alt="W00tcamp 2018 - Augmenting the sliding experience"><p>Every year <a href="https://q42.com">Q42</a> organises a hackathon - <a href="https://w00t.camp"><em>W00tcamp</em></a>. We take two days with all colleagues, invite some of our friends, form teams and try to create something that will amaze you, make you laugh but above all will be an awesome learning experience. This year I enhanced the slide at our The Hague office to have sound &amp; light effects. Read this blog for more details on how it came to be.</p><h2 id="how-do-you-approach-a-hackathon">How do you approach a hackathon?</h2><blockquote>The goal at W00tcamp is to present your project to a jury at the end of the second day. It will be judged based on level of polish, impact &amp; business value.</blockquote><p>What I have learned from taking part in multiple hackathons and w00tcamps in the past years is that finishing something in just two days is <strong><u>hard</u></strong>. Taking the time after the hackathon ends to deliver a working result is <u>even harder</u>. 
This is why I always set very clear requirements for the project I put forward or the team I join:</p><ul><li>Focus on building the MVP (you can always expand);</li><li>Treat the night (aka party time 🍻) as if you are working overtime;</li><li>New tech to explore? Do a spike before the hackathon;</li><li>Going to use hardware? Make sure you have all components and did a PoC;</li><li>Slim down the MVP until you have the gut feeling it is doable in one day (it never is).</li></ul><h2 id="lets-talk-about-augmenting-the-slide">Let's talk about augmenting the slide</h2><p>A few years back I played around with a sonar sensor + raspberry pi + camera to have the slide in our Amsterdam office take a picture when you slide. You can visit <a href="http://www.superslideqam.nl/about.html">http://www.superslideqam.nl</a> for a pretty detailed write-up of how it came to be. This gave me the confidence that we could do something similar for our The Hague office in a comparable time frame.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/02/LED-STRIP-LIGHTS.jpg" class="kg-image" alt="W00tcamp 2018 - Augmenting the sliding experience"></figure><p>So for this year's W00tcamp I wanted to enhance the sliding experience at our office in The Hague with light &amp; sound effects. You can find the <a href="https://w00t.camp/project/2018/special-effects-voor-de-glijbaan">initial pitch here (Dutch)</a>.</p><h2 id="the-idea">The idea</h2><blockquote>What if you take the slide and by doing so trigger sound &amp; light effects?</blockquote><p>We had all sorts of additional ideas if time would allow it. We wanted to have a start &amp; end sensor to measure your speed. We thought about using an iPad to capture a video of the slide and use facial recognition to keep a high score per user (anonymous or mapped to known colleagues). 
But all these ideas were prioritised as non-MVP.</p><h2 id="hardware-required">Hardware required</h2><p>We needed to get a lot of components from local vendors or AliExpress, which take anywhere between 14 and 60 days to arrive:</p><ul><li>LED strip of 5m @ 60 LEDs/meter (WS2812B chipset)</li><li>Power supply for the LED strip (60mA * 60 LEDs/m * 5m -&gt; 18A@5V, roughly 90W) =&gt; a 100W 220V unit</li><li>1000uF capacitor (to prevent damaging LEDs on power surges)</li><li>Raspberry Pi for sound effects </li><li>Simple speaker</li><li>Laser</li><li>Laser-diode to receive the signal</li><li>Arduino Uno for sensor readings + LED animations</li></ul><p>Note: initially we used an Arduino for the analog sensor readings + LED animations. A few weeks after W00tcamp we also found that we could plug the sensor + LED strip directly into the Raspberry Pi to further simplify the design, using an <a href="https://www.sparkfun.com/products/8636">Analog to Digital converter chip (MCP3002)</a>.</p><h2 id="designing-the-hardware">Designing the hardware</h2><p>To determine that somebody is using the slide we made a simple tripwire by having a laser reflect back to a diode at the entrance of the slide:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/02/ezgif.com-video-to-gif.gif" class="kg-image" alt="W00tcamp 2018 - Augmenting the sliding experience"><figcaption>Triggering the tripwire</figcaption></figure><p></p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/02/IMG_20181109_183732.jpg" class="kg-image" alt="W00tcamp 2018 - Augmenting the sliding experience"></figure><h2 id="prototyping-the-casing-mirror-assembly">Prototyping the casing &amp; mirror assembly</h2><p>We created a lot of iterations of the casing containing the laser and diode, as well as the kinetic mirror mount to reflect the laser back to create the tripwire. Luckily we have a 3d printer at our office. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/02/IMG_20181127_091101.jpg" class="kg-image" alt="W00tcamp 2018 - Augmenting the sliding experience"><figcaption>Hardware prototypes</figcaption></figure><p>For reflecting the laser we created our own kinetic mirror mount so that we could adjust the angle at which the laser beam was reflected to have it directly hit the diode. These are pretty expensive to buy but again the 3d printer was a life saver.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/02/IMG_20190215_141919.jpg" class="kg-image" alt="W00tcamp 2018 - Augmenting the sliding experience"></figure><p>The kinetic mirror mount design my team member created is hosted on <a href="https://www.thingiverse.com/thing:3216112">Thingiverse</a> so that you can use it for your own project.</p><h2 id="the-result">The result</h2><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/02/ezgif.com-gif-maker.gif" class="kg-image" alt="W00tcamp 2018 - Augmenting the sliding experience"></figure><p>#success!</p>]]></content:encoded></item><item><title><![CDATA[Kubernetes workshop at EV-Box]]></title><description><![CDATA[EVBox is a producer of electric vehicle charging stations, which come bundled with a managed software solution. 

Given the fact that the Philips Hue cloud is somewhat similar (hardware + cloud offering) they reached out to us to share our knowledge during a workshop.]]></description><link>https://markswanderingthoughts.nl/kubernetes-workshop-at-ev-box/</link><guid isPermaLink="false">5c59f44c25fd054a8e15e859</guid><category><![CDATA[Public Speaking]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Istio]]></category><category><![CDATA[Workshop]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Sun, 09 Dec 2018 14:58:00 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/02/elvi-thumb3.f6a561cb.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2019/02/elvi-thumb3.f6a561cb.jpg" alt="Kubernetes workshop at EV-Box"><p>EVBox is a producer of electric vehicle charging stations, which come bundled with a managed software solution. Originally founded in 2010 in the Netherlands, they are experiencing tremendous growth. This has also resulted in them needing to redesign their cloud infrastructure to scale towards the future. </p><p>Given the fact that the Philips Hue Cloud is somewhat similar (hardware + cloud offering) and also runs on Kubernetes, they reached out to us to request a workshop to address specific needs and get their engineering team up to speed.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/02/IMG_20181107_084448.jpg" class="kg-image" alt="Kubernetes workshop at EV-Box"><figcaption>Got to see the Formula-E racecar in real life!</figcaption></figure><h2 id="challenges">Challenges</h2><p>Given the fact that EVBox is expanding rapidly they have experienced a lot of growing pains in their DevOps way of working. 
Getting code shipped to production became more difficult, and assessing that it works as expected using metrics instead of tailing logs quickly became cumbersome. </p><p>We split the day into two parts: the morning was geared towards broad Kubernetes knowledge, how to use it and how to observe the system to decide upon the right course of action. This was to facilitate the <a href="https://golean.io/blog/2018/03/28/introduction-to-ooda-continuous-governance-framework/">OODA loop</a> in the development process using Kubernetes.</p><figure class="kg-card kg-image-card"><img src="https://markswanderingthoughts.nl/content/images/2019/02/ooda-loop.png" class="kg-image" alt="Kubernetes workshop at EV-Box"></figure><p>In the afternoon we had more specific topics to discuss, like how to do global rollouts (multi-region), how to improve network-based security using NetworkPolicies and of course the hot and upcoming <a href="https://www.youtube.com/watch?v=8OjOGJKM98o">Istio service mesh</a>: what it contains and why you would want to use it.</p><p>All in all we had a great day of sharing knowledge, discussing unsolved issues and how to approach them. It was great to see the passion in the team at EVBox and the quality they strive to deliver.</p><h2 id="slides">Slides</h2><figure class="kg-card kg-embed-card"><iframe id="talk_frame_495857" src="//speakerdeck.com/player/c8989004328541cebe544e12c772c9fa" width="710" height="399" style="border:0; padding:0; margin:0; background:transparent;" frameborder="0" allowtransparency="true" allowfullscreen="allowfullscreen" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
</figure><p>Got interested in the challenges that EVBox is solving? They are growing rapidly and can always <a href="https://evbox.com/about/careers/software/#jobs">use skilled software engineers</a>.</p>]]></content:encoded></item><item><title><![CDATA[Team Philips Hue goes on coderetreat]]></title><description><![CDATA[The Philips Hue team took a day-long coderetreat in which they went through multiple iterations of Conway's Game of Life]]></description><link>https://markswanderingthoughts.nl/team-philips-hue-goes-on-a-code-retreat/</link><guid isPermaLink="false">5c376a5e1ffa7f0d11a38a43</guid><category><![CDATA[Software Development]]></category><category><![CDATA[Coderetreat]]></category><category><![CDATA[Philips Hue]]></category><category><![CDATA[Workshop]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Fri, 07 Jul 2017 14:53:00 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2019/01/conwaysgameoflife.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://markswanderingthoughts.nl/content/images/2019/01/conwaysgameoflife.jpg" alt="Team Philips Hue goes on coderetreat"><p>Because we always want to improve our coding skills and the Philips Hue team has been growing significantly in the last 3 months, we decided it was time to flex our developer muscles and practice our teamwork. A great way to do so is during a coderetreat, which I was glad to be able to facilitate for my team.</p><blockquote><em>A coderetreat is a day-long, intensive practice event, focusing on the fundamentals of software development and design.</em></blockquote><p>During a coderetreat multiple sessions are held in which people form pairs and try to implement a problem, most times <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life" rel="noopener">Conway’s Game of Life</a>. Each session takes just 45 minutes and enforces certain restrictions to get you really <em>out of your comfort zone</em>. 
And the best thing is that when the session ends you <em>throw away</em> the code to absolve yourself of your moral obligations to it. People tend to find this really hard during the first session but get comfortable with it during the day. Then to wrap up a session we do a short retrospective to share our experiences, take a break for coffee and hack away at the next session!</p><h3 id="schedule-of-the-day">Schedule of the day</h3><p>Since most of my team had never done a coderetreat before, I decided on the following constraints per session:</p><ol><li>Getting to know the <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Rules" rel="noopener">problem domain</a> &amp; <a href="http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html" rel="noopener">TDD</a> using <a href="http://coderetreat.org/facilitating/activities/ping-pong" rel="noopener">Ping-Pong</a> in which person A writes a test and person B implements it before switching roles;</li><li>Force people to <em>think small</em> using <a href="http://coderetreat.org/profiles/blogs/taking-baby-steps" rel="noopener">Baby-Steps</a> in which we restrict ourselves to creating a test and implementing its code in <em>just 2 minutes</em> or revert all changes to try again;</li><li>Challenge ourselves to <em>write very explicit tests</em> using <a href="http://blog.adrianbolboaca.ro/2013/10/pair-programming-game-silent-programming/" rel="noopener">Mute Evil Coder</a> where no speaking is allowed and the test may be implemented as evil as possible before writing the next test while still trying to get all business rules implemented;</li><li><em>Experience legacy code</em> by making the pairs implement as fast as possible without tests before reorganizing the pairs and making them fix everything.</li></ol><p>And to wrap things up we did a group programming exercise called <a href="https://www.agilealliance.org/pair-programming-versus-mob-programming/" rel="noopener">Mob Programming</a> in which 
everybody works together with only one computer to implement the solution. We took a different coding kata about an LCD panel to have a fresh challenge:</p><!--kg-card-begin: html--><script src="https://gist.github.com/crunchie84/4aa7c1c29e94aef4f28a1f3a9efedc92.js"></script><!--kg-card-end: html--><p>While mobbing, one person is the driver at the computer and may only write code as directed by the navigator. All other persons are co-navigators / researchers. Every few minutes the roles switch.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2019/01/mob-programming-in-action.jpeg" class="kg-image" alt="Team Philips Hue goes on coderetreat"><figcaption>Mob programming in action (photo: Mark van Straten)</figcaption></figure><h3 id="learnings">Learnings</h3><p>We had a blast doing the coderetreat. Of all the things we learned the following were most noteworthy:</p><ul><li>We experienced that using a programming language we aspire to use (Go) was hindering our ability to reach the sessions' goals, so we reverted to NodeJs, in which everybody is fluent;</li><li>Restricting yourself to just 2 minutes is very frustrating at first; people really hated reverting their code. But still almost everybody found small enough steps to make progress;</li><li>Most people were initially hesitant that they could get anything done within 2 minutes but found out that it was actually pretty easy;</li><li>The Mute Evil Coder session ended up being a true game of chess for some, in which all tests passed but nothing worked as expected;</li><li>Randomising test inputs proved an effective way to sidestep evil coding practices;</li></ul><p>But above all: <strong>everybody learned a lot</strong> and got to <strong>know each other much better</strong>. 
Mission accomplished!</p>]]></content:encoded></item><item><title><![CDATA[How I do testing of RxJs4 and RxJs5 code]]></title><description><![CDATA[This blogpost documents my search for a way to keep testing in RxJs5 simple.]]></description><link>https://markswanderingthoughts.nl/how-i-do-testing-of-rxjs4-and-rxjs5-code/</link><guid isPermaLink="false">5c13db791ffa7f0d11a388fb</guid><category><![CDATA[Rxjs5]]></category><category><![CDATA[Rxjs4]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Rxjs]]></category><category><![CDATA[reactive-programming]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Sat, 21 Jan 2017 20:50:40 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2018/12/victoria_falls.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://markswanderingthoughts.nl/content/images/2018/12/victoria_falls.jpg" alt="How I do testing of RxJs4 and RxJs5 code"><p>Since starting to use RxJs I have been searching for the best and easiest approach for testing. Just after I got a setup which worked pretty well, RxJs5 was introduced, which is largely incompatible with the previous testing approach. This blogpost documents my search for a way to keep testing in RxJs5 simple.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="testinginrxjs4">Testing in RxJs4</h2>
<p>The <a href="https://github.com/Reactive-Extensions/RxJS/blob/master/doc/gettingstarted/testing.md">documentation</a> about how to test with RxJs4 is pretty good. Depending on the case I take one of two roads to testing Rx code: dumbing it down to promises or full-fledged testing using the TestScheduler.</p>
<h3 id="usingpromises">using Promises</h3>
<p>Depending on your needs you can convert the Rx Observable stream with <code>toPromise()</code> and verify the output. The pro of this approach is that little scaffolding is required. The con is that you do not have access to the timings of emissions.</p>
<pre><code class="language-lang-js">it('can be done using .toPromise()', () =&gt; {
  const rxPromise = Rx.Observable.from([2,4,6,8])
    .filter(val =&gt; val &gt; 4)
    .toArray() /* because only the last value or error will be passed to .toPromise() */
    .toPromise();

  return rxPromise
    .then(
      res =&gt; expect(res).to.deep.equal([6,8]), /* deep equality: a plain .equal would compare array references */
      err =&gt; { throw new Error(`expected result but got error: ${err.message}`)}
    );
});
</code></pre>
<p>By returning the promise from the <code>it()</code> callback you let the test framework (in my case Mocha) handle the promise without having to pass the <code>done</code> callback to your tests and invoke that yourself.</p>
<h3 id="usingthetestscheduler">Using the TestScheduler</h3>
<p>You can also create an instance of the <a href="https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/testing/testscheduler.md">TestScheduler</a> which gives you access to virtual time testing. This is also required when you start doing tests with operators which are time-bound like <code>.delay</code> or <code>.interval</code>. All these operators take an optional <code>scheduler</code> as last argument for you to replace the default implementation.</p>
<p>Because the testScheduler invokes your Rx stream in virtual time you will need to advance it yourself procedurally or call <a href="https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/testing/testscheduler.md#rxtestschedulerprototypestartschedulercreate-settings"><code>.startScheduler</code></a> and let it run its course:</p>
<pre><code class="language-lang-js">const onNext = Rx.ReactiveTest.onNext;
const onCompleted = Rx.ReactiveTest.onCompleted;

it('can be done using the TestScheduler', () =&gt; {
  const scheduler = new Rx.TestScheduler();

  const results = scheduler.startScheduler(
    () =&gt; Rx.Observable.interval(100, scheduler).take(3),
    { created: 100, subscribed: 200, disposed: 1000 } /* note: these are the default settings; included only for clarity */
  );

  collectionAssert.assertEqual(results.messages, [
    onNext(200 + 100, 0),
    onNext(200 + 200, 1),
    onNext(200 + 300, 2),
    onCompleted(200 + 300)
  ]);
});
</code></pre>
<p>A basic implementation of <code>collectionAssert</code> <a href="https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/testing/testscheduler.md#usage">can be found in the RxJs4 documentation</a>.</p>
<h2 id="testinginrxjs5therxjs4way">Testing in RxJs5 the RxJs4 way</h2>
<p>When you migrate your codebase from RxJs4 towards 5 you will find out that a lot of things have been moved, renamed and above all that the implementation of the TestScheduler is no longer available. RxJs contributor kwonoj has created a <a href="https://www.npmjs.com/package/@kwonoj/rxjs-testscheduler-compat">compatibility shim to help migration towards RxJs5</a>. You can install it using npm <code>npm install @kwonoj/rxjs-testscheduler-compat</code>. Not all features of the TestScheduler are implemented but the most important <code>.startScheduler</code> is working.</p>
<pre><code class="language-lang-js">const TestScheduler = require('@kwonoj/rxjs-testscheduler-compat').TestScheduler;
const next = require('@kwonoj/rxjs-testscheduler-compat').next;
const complete = require('@kwonoj/rxjs-testscheduler-compat').complete;

it('works in RxJs5 with the compat package', () =&gt; {
  const scheduler = new TestScheduler(); // Note; no longer the Rx.TestScheduler

  const results = scheduler.startScheduler(
    () =&gt; Rx.Observable.interval(100, scheduler).take(3),
    { created: 100, subscribed: 200, unsubscribed: 1000 } // NOTE: disposed is now renamed to unsubscribed
  );

  collectionAssert.assertEqual(results.messages, [
    next(200 + 100, 0),
    next(200 + 200, 1),
    next(200 + 300, 2),
    complete(200 + 300)
  ]);
});
</code></pre>
<p>I have adapted my own version of the <code>collectionAssert</code> to work with RxJs5 in combination with the compatibility package; it <a href="https://gist.github.com/crunchie84/82dc64060dcb429704d8beb03e3a1d9b#file-rx-collection-compare-js">is available as a gist</a>.</p>
<h2 id="testinginrxjs5usingthenewmarbletestingsyntax">Testing in RxJs5 using the new Marble testing syntax</h2>
<p>The RxJs team has introduced <a href="https://github.com/ReactiveX/rxjs/blob/master/doc/writing-marble-tests.md#anatomy-of-a-test">marble testing syntax</a> to more visually define how your operator or custom code should operate.</p>
<pre><code>    var e1 = hot('----a--^--b-------c--|');
    var e2 = hot(  '---d-^--e---------f-----|');
    var expected =      '---(be)----c-f-----|';

    expectObservable(e1.merge(e2)).toBe(expected);
</code></pre>
<p>At the time of writing this post they have not yet made this approach really easy to use outside of the RxJs5 library itself. There are <a href="https://github.com/ngrx/store/blob/master/spec/helpers/marble-testing.ts">implementations available</a> to see how to do it yourself. You can also look around in the <a href="https://github.com/ReactiveX/rxjs/blob/147ce3e4c36807b5c61c7e82ccfff1490eed54ff/.markdown-doctest-setup.js">codebase of RxJs5</a> to see how to set up your testing framework to do your own marble tests. There is an open <a href="https://github.com/ReactiveX/rxjs/issues/1791">issue about documenting testing with RxJs5</a>. I have not yet succeeded in getting my testing framework set up for marble testing in this way.</p>
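<p>To make the timing rules of the syntax concrete: each character is one frame of 10 virtual-time units, <code>-</code> is idle, <code>|</code> completes, <code>#</code> errors and <code>^</code> marks the subscription point. A toy parser (my own illustration, not the real RxJs5 one, and ignoring <code>(abc)</code> grouping and value lookup maps) could look like:</p>

```javascript
// Toy marble-diagram parser, only to illustrate the timing rules; the real
// parser inside RxJs5 also handles '(abc)' grouping and value lookup maps.
function parseMarbles(marbles) {
  const subscription = marbles.indexOf('^');
  const zero = subscription === -1 ? 0 : subscription;
  const messages = [];
  for (let i = 0; i < marbles.length; i++) {
    const c = marbles[i];
    const frame = (i - zero) * 10; // one character equals 10 virtual frames
    if (c === '-' || c === '^' || c === ' ') continue;
    if (c === '|') messages.push({ frame: frame, kind: 'complete' });
    else if (c === '#') messages.push({ frame: frame, kind: 'error' });
    else messages.push({ frame: frame, kind: 'next', value: c });
  }
  return messages;
}
```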
<h2 id="conclusions">Conclusions</h2>
<p>I would love to start using the marble diagram syntax for my testing because it is less verbose in the setup and communicates the emission of values over time far better than the emissions array filled with <code>onNext</code>/<code>onError</code>/<code>onCompleted</code> statements in RxJs4. But for now it seems just a bit out of reach.</p>
<p>Since the compatibility shim is a great workaround, I will keep using it to write tests for RxJs5 as I did for RxJs4, until the marble diagram syntax becomes easier to adopt.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building your own horizontal pod autoscaler for Kubernetes]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The current version of Kubernetes (1.3) is quite packed with features to have your containerized application run smoothly in production. Some features are still a bit minimal viable product like the scaling options of the <a href="http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/">horizontal pod autoscaler (HPA)</a>. Currently you are only able to scale based on CPU</p>]]></description><link>https://markswanderingthoughts.nl/building-your-own-horizontal-pod-autoscaler-for/</link><guid isPermaLink="false">5c13db791ffa7f0d11a388fc</guid><category><![CDATA[Scaling]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Rxjs]]></category><category><![CDATA[reactive-programming]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Fri, 12 Aug 2016 13:49:22 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The current version of Kubernetes (1.3) is quite packed with features to have your containerized application run smoothly in production. Some features are still a bit minimal viable product like the scaling options of the <a href="http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/">horizontal pod autoscaler (HPA)</a>. Currently you are only able to scale based on CPU and Memory consumption (custom scale metrics are in <a href="https://github.com/kubernetes/kubernetes/blob/release-1.3/docs/proposals/custom-metrics.md">alpha</a>).</p>
<p>One of our applications is a websocket server designed for long-lived client connections. While performance testing we found that our application hit its bottleneck at around 25,000 active websocket connections before destabilizing and crashing. While running this load each pod did not show elevated CPU load or memory pressure. Thus our need for scaling by websocket connection count was born. This blogpost describes our learnings while building our own custom Horizontal Pod Autoscaler.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="howdoestheoriginalhpaofkuberneteswork">How does the original HPA of Kubernetes work</h2>
<p>While looking at the source code of Kubernetes (<a href="https://github.com/kubernetes/kubernetes/blob/c5e82a01b15136141801ba93de810fdf3165086b/pkg/controller/podautoscaler/horizontal.go#L133"><code>computeReplicasForCPUUtilization()</code></a>) we see that the current implementation is very straightforward:</p>
<ol>
<li>Calculate the CPU utilization of all the pods</li>
<li>Calculate the amount of pods required based on the <code>targetUtilization</code></li>
<li>Scale to the calculated amount of replicas</li>
</ol>
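<p>As a sketch (hypothetical helper names; the real logic lives in <code>horizontal.go</code>), those three steps boil down to:</p>

```javascript
// Sketch of the built-in HPA computation: utilizations are percentages per
// pod, targetUtilization is the desired average percentage.
function computeDesiredReplicas(podUtilizations, targetUtilization) {
  const total = podUtilizations.reduce((sum, u) => sum + u, 0);
  const averageUtilization = total / podUtilizations.length;
  // Scale the current replica count by how far we are off the target,
  // rounding up so we never under-provision.
  return Math.ceil(
    podUtilizations.length * (averageUtilization / targetUtilization));
}
```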
<p>We decided we could do better. We defined the following goals for our custom HPA:</p>
<ul>
<li>Do not crash the application for current load (even if load exceeds available capacity)</li>
<li>Scale up fast, overscale if needed</li>
<li>Take the boot-up time of a new application instance into account when deciding to scale</li>
<li>Scale down gradually; only scale down when the current load still fits the capacity that would remain afterwards</li>
</ul>
<h2 id="makingsureourapplicationdoesnotcrash">Making sure our application does not crash</h2>
<p>To prevent our application from crashing we implemented a <a href="http://kubernetes.io/docs/user-guide/pod-states/#container-probes">ReadinessProbe</a> which marks our pod as <code>NotReady</code> when it reaches the connection limit. This results in the Kubernetes load balancer no longer sending new traffic to this pod. Once the number of connections to the pod falls below the connection limit it is marked as <code>Ready</code> again and starts receiving traffic from the Kubernetes load balancer again. This process needs to go hand in hand with the scaling of pods, otherwise new requests would eventually hit the load balancer with no available pods in its pool.</p>
<h2 id="fastupscaling">Fast upscaling</h2>
<p>When scaling up we want to make sure that we can handle the increased amount of connections. Thus scaling up should happen fast, overscaling if needed. Since the application needs some time to spin up, we need to predict the load we will be receiving by the time the scale operation completes, given that we start it now and know the history of the <code>websocketConnectionCount</code>.</p>
<p>We initially thought about using a linear prediction based on the last <em>n</em>=5 <code>websocketConnectionCount</code> values but that led to suboptimal predictions when the number of connections increases or decreases at an exponential rate. We then started using the <a href="http://npmjs.com/package/regression">npm <em>regression</em> library</a> to do <a href="https://en.wikipedia.org/wiki/Polynomial_regression#Matrix_form_and_calculation_of_estimates">second-degree polynomial regression</a> to find a formula which fits the evolution of our connectionCount and then evaluate it to obtain the prediction for the next value.</p>
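<p>The idea can be sketched without the library: fit y = a + b·t + c·t² to the recent counts by solving the normal equations, then extrapolate one step ahead. This hand-rolled version is only an illustration of what the <em>regression</em> package does for us:</p>

```javascript
// Hand-rolled second-degree polynomial least-squares fit, standing in for
// the npm `regression` library. points is an array of [t, y] samples.
function fitQuadratic(points) {
  const s = [0, 0, 0, 0, 0]; // sums of t^0 .. t^4
  const r = [0, 0, 0];       // sums of y*t^0 .. y*t^2
  for (const [t, y] of points) {
    let p = 1;
    for (let k = 0; k <= 4; k++) {
      s[k] += p;
      if (k <= 2) r[k] += y * p;
      p *= t;
    }
  }
  // Augmented matrix of the 3x3 normal equations, solved by Gauss-Jordan
  const m = [
    [s[0], s[1], s[2], r[0]],
    [s[1], s[2], s[3], r[1]],
    [s[2], s[3], s[4], r[2]],
  ];
  for (let i = 0; i < 3; i++) {
    const pivot = m[i][i];
    for (let j = i; j <= 3; j++) m[i][j] /= pivot;
    for (let k = 0; k < 3; k++) {
      if (k === i) continue;
      const factor = m[k][i];
      for (let j = i; j <= 3; j++) m[k][j] -= factor * m[i][j];
    }
  }
  const a = m[0][3], b = m[1][3], c = m[2][3];
  return t => a + b * t + c * t * t;
}

// history: the last n connection counts, oldest first
function predictNextConnectionCount(history) {
  const points = history.map((y, t) => [t, y]);
  return fitQuadratic(points)(history.length);
}
```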
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2018/12/k8s-custom-hpa-scaling-up.png" class="kg-image" alt><figcaption>Dotted line is the predicted load</figcaption></figure><!--kg-card-begin: markdown--><h2 id="gradualdownscaling">Gradual downscaling</h2>
<p>When scaling down we do not scale based on predictions because that might result in scaling down pods which are still required for the current load. We also need to be more lenient when scaling down because our disconnected websockets will try to reconnect. So when we detect that the prediction from the polynomial regression is less than the previous <code>websocketConnectionCount</code> we reduce it by 5% instead and use that as the prediction. That way scaling down takes quite a while, keeping capacity available for returning connections.</p>
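<p>That dampening rule is small enough to state directly (a sketch; function and argument names are illustrative):</p>

```javascript
// Never follow a falling prediction directly: step down by at most 5% of
// the current count so that reconnecting websockets still find capacity.
function dampenedPrediction(currentCount, predictedCount) {
  if (predictedCount < currentCount) {
    return currentCount * 0.95;
  }
  return predictedCount; // scaling up: trust (or overshoot) the prediction
}
```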
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2018/12/k8s-custom-hpa-scaling-down.png" class="kg-image" alt><figcaption>Dotted line is the 5% reduction because prediction was lower than current load</figcaption></figure><!--kg-card-begin: markdown--><p>If over time those connections never return we are still downscaling but at a slow rate.</p>
<h2 id="executingkubernetesscaleoperations">Executing Kubernetes scale operations</h2>
<p>Because our custom HPA is running within the same Kubernetes cluster it can retrieve a service token from <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> to access the API running on the master. Using that token we can access the API to apply a patch http request to the replicas of the deployment containing your pods, effectively scaling your application.</p>
<h2 id="mergingitallwithrxjs">Merging it all with RxJS</h2>
<p>We used <a href="https://github.com/Reactive-Extensions/RxJS">RxJS</a> so we could use functional composition over a stream of future events. This resulted in very readable code like this:</p>
<pre><code class="language-lang-javascript">const Rx = require('rx');
const credentials = getKubernetesCredentials();

Rx.Observable.interval(10 * 1000)
  .flatMap(() =&gt; getMetricsofPods(credentials.masterUrl, credentials.token)) /* flatMap so the next operator receives the metrics, not an Observable */
  .map(metrics =&gt; predictNumberOfPods(metrics, MAX_CONNECTIONS_PER_POD))
  .distinctUntilChanged(prediction =&gt; prediction)
  .map(prediction =&gt; scaleDeploymentInfiniteRetries(credentials.masterUrl, credentials.token, prediction))
  .switch()
  .subscribe(
    onNext =&gt; { },
    onError =&gt; {
      console.log(`Uncaught error: ${onError.message} ${onError.stack}`);
      process.exit(1);
    });
  // NOTE: getKubernetesCredentials(), getMetricsofPods(), predictNumberOfPods(), scaleDeploymentInfiniteRetries() left out for brevity
</code></pre>
<p>It is really elegant that we were able to use <code>map()</code> + <a href="https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/switch.md"><code>switch()</code></a> to keep trying to scale the deployment (+ log errors) until it succeeds or when a newer scale request is initiated.</p>
<h2 id="partingthoughts">Parting thoughts</h2>
<p>Building our own HPA was a load of fun. Using the Kubernetes API is a great experience and is an example of how an API should be designed. At first we thought it would be a massive undertaking to develop our own HPA but in the end we were really pleased with how the pieces came together. Using RxJS is a definite game changer when trying to describe the flow of your code without cluttering it with state management. Overall we are happy with the results and as far as we can tell our predictions are working quite nicely with real connections.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Kubernetes ReadinessProbe should be mandatory]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For my current customer I'm in the process of migrating their application from <a href="https://cloud.google.com/appengine/docs/flexible/">Google App Engine Flexible</a> (former Managed VMs) toward <a href="http://kubernetes.io/">Kubernetes</a> running on <a href="https://cloud.google.com/container-engine/">GKE</a>. During extensive tests I concluded that although pod <a href="http://kubernetes.io/docs/user-guide/pod-states/#container-probes">readinessProbes</a> are optional, they should in fact be mandatory when having a <a href="http://kubernetes.io/docs/user-guide/services/">Service</a> connected.</p>
<h2 id="whatisareadinessprobeanyway">What is a readinessProbe anyway?</h2>]]></description><link>https://markswanderingthoughts.nl/kubernetes-readinessprobe-should-be-mandatory/</link><guid isPermaLink="false">5c13db791ffa7f0d11a388fd</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Gke]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Tue, 03 May 2016 11:37:59 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>For my current customer I'm in the process of migrating their application from <a href="https://cloud.google.com/appengine/docs/flexible/">Google App Engine Flexible</a> (formerly Managed VMs) to <a href="http://kubernetes.io/">Kubernetes</a> running on <a href="https://cloud.google.com/container-engine/">GKE</a>. During extensive tests I concluded that although pod <a href="http://kubernetes.io/docs/user-guide/pod-states/#container-probes">readinessProbes</a> are optional, they should in fact be mandatory whenever a <a href="http://kubernetes.io/docs/user-guide/services/">Service</a> is connected.</p>
<h2 id="whatisareadinessprobeanyway">What is a readinessProbe anyway?</h2>
<p>Because your application is wrapped in a Docker container, running inside a Kubernetes pod allocated on a node, there needs to be a mechanism to interact with it during its lifetime. Kubernetes currently provides two types of <a href="http://kubernetes.io/docs/user-guide/liveness/">healthchecks</a> to verify that your application is working as intended: <em>LivenessProbes</em> &amp; <em>ReadinessProbes</em>. Both kinds of probes are the responsibility of the <code>Kubelet agent</code>.</p>
<ul>
<li>LivenessProbes are used to determine that a pod needs to be restarted because its behavior is incorrect.</li>
<li>ReadinessProbes are used to signal to the load balancer that a pod is not accepting workload at the moment.</li>
</ul>
<h2 id="adayinthelifeofyourpod">A day in the life of your pod</h2>
<p>Picture a simple sample application (an HTTP web server) running as a pod in your Kubernetes cluster, receiving traffic through a <a href="http://kubernetes.io/docs/user-guide/services/">Service</a>. You have more than enough machines running, with room for pods to scale if need be. And today is the big day: you get <a href="https://en.wikipedia.org/wiki/Slashdot_effect">slashdotted</a>.</p>
<p>Without a readinessProbe defined, this is how your traffic might look:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2018/12/k8s-pod-scaling-no-readiness-probe.png" class="kg-image" alt><figcaption>Kubernetes pod scaling without using readiness checks</figcaption></figure><!--kg-card-begin: markdown--><p>This is because your pod starts receiving load as soon as it is found to be in a <em>Running</em> state and <em>Ready</em> to process work.</p>
<p>From the <a href="http://kubernetes.io/docs/user-guide/pod-states/#container-probes">documentation</a>:</p>
<blockquote>
<p>The default state of Readiness before the initial delay is <em>Failure</em>. The state of Readiness for a container when no probe is provided is assumed to be <em>Success</em></p>
</blockquote>
<p>Combine this with the <a href="http://kubernetes.io/docs/user-guide/pod-states/#pod-phase">pod phase</a>:</p>
<blockquote>
<p><em>Running</em>: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.</p>
</blockquote>
<p>Nothing in this verifies that your application is done with its initial setup and ready to start processing.</p>
<h2 id="letsfixitwithreadinesschecks">Let's fix it with readiness checks</h2>
<p>Append a readinessProbe to the configuration of your pods:</p>
<pre><code class="language-yaml">containers:
 - name: my-wordpress-application
   image: my-wordpress-application:0.0.1
   ports:
   - containerPort: 80
   readinessProbe:
     httpGet:
       # Path to probe; should be cheap, but representative of typical behavior
       path: /index.html
       port: 80
     initialDelaySeconds: 5
     timeoutSeconds: 1
</code></pre>
<p>This probe waits 5 seconds before first verifying that the actual application running in your container is ready to serve. If the probe fails, it is retried 10 seconds later (the default probe interval). Only when your application starts to respond to the HTTP GET request with a 200-399 status code will the pod be added to the load balancer.</p>
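<p>The 10-second retry interval is simply Kubernetes' default probe interval; if you want a different cadence you can set it explicitly with <code>periodSeconds</code> (the value below is illustrative, not from our production setup):</p>
<pre><code class="language-yaml">readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 10  # how often to run the probe; 10 is the default
</code></pre>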
<p>And lo and behold, your scaling behaviour will now look as follows:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://markswanderingthoughts.nl/content/images/2018/12/k8s-pod-scaling-with-readiness-probe.png" class="kg-image" alt><figcaption>Kubernetes pod scaling with readiness probe</figcaption></figure><!--kg-card-begin: markdown--><p>This leads me to believe that when your pod receives input of some sort, you should always append a readinessProbe to prevent errors during the initial bootup of your application.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Speaking at NDC Oslo 2015]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This year I got the chance to give a talk at the <a href="http://www.ndcoslo.com">NDC Oslo 2015</a> about using StatsD to measure your application's performance. Giving a really long talk to a big crowd of non-Dutch attendees was quite the feat! I had a load of fun and learned a ton.</p>]]></description><link>https://markswanderingthoughts.nl/measuring-what-mattersndc-oslo-2015/</link><guid isPermaLink="false">5c13db791ffa7f0d11a388fe</guid><category><![CDATA[Speaking]]></category><category><![CDATA[Statsd]]></category><category><![CDATA[Grafana]]></category><dc:creator><![CDATA[Mark van Straten]]></dc:creator><pubDate>Wed, 01 Jul 2015 14:22:08 GMT</pubDate><media:content url="https://markswanderingthoughts.nl/content/images/2018/12/IMG_20150619_144413.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://markswanderingthoughts.nl/content/images/2018/12/IMG_20150619_144413.jpg" alt="Speaking at NDC Oslo 2015"><p>This year I got the chance to give a talk at the <a href="http://www.ndcoslo.com">NDC Oslo 2015</a> about using StatsD to measure your application's performance. Giving a really long talk to a big crowd of non-Dutch attendees was quite the feat! I had a load of fun and learned a ton.</p>
<h2 id="thendc">The NDC</h2>
<p>The NDC (former: Norwegian Developer Conference, now: New Developer Conference) is an annual conference originating from <a href="http://www.ndcoslo.com">Oslo</a>, now with spinoffs in <a href="http://www.ndc-london.com">London</a> and, next year, Australia. The focus of the conference is mainly .NET developer oriented, but with the increasing popularity of F# it is also expanding the range of talks. The days are long (9:00-19:00) and the diversity in talks is huge (9+ parallel tracks!). Together with the speaker dinner on the first day and the party on the second, it makes for a really intense three-day conference packed with fun and brilliant ideas.</p>
<p>The talk I gave at NDC:</p>
<h2 id="knowledgeispowertheguidetomeasurewhatmatters">Knowledge is power! The guide to measure what matters</h2>
<p>How do you monitor the key performance indicators of your application? Do you know if signups are decreasing versus last week? Have you adopted agile principles, but have a hard time monitoring the improvements of your continuous deployments? In this talk we will briefly discuss multiple measuring solutions before diving into the nitty-gritty details of measuring with the help of StatsD. We will implement a few counters and timers and graph them so we can start to make sense of the data. Then we will use powerful functions to analyse the data and spot trends before your users do.</p>
<p>After this talk you will be empowered to create your own metrics with the help of StatsD and will have the basic knowledge to plot those metrics as meaningful graphs. Be empowered!</p>
<p>Code examples will be in C#, but the technology demonstrated is not limited to it.</p>
<h2 id="slidesrecording">Slides &amp; Recording</h2>
<!--kg-card-end: markdown--><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/131644108?app_id=122963" width="480" height="270" frameborder="0" title="Knowledge is power! The guide to measure what matters. - Mark van Straten" allow="autoplay; fullscreen" allowfullscreen></iframe></figure><!--kg-card-begin: html--><script async class="speakerdeck-embed" data-id="9fc5d2785b32424e9726b94ab5fe282d" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script>
<!--kg-card-end: html-->]]></content:encoded></item></channel></rss>