in which he writes about technical stuff
Wow, how time flies! My last blog post (before this one, that is) dates from 5 years and 3 days ago. 5 years and 2 days later, here I am creating a new post to say that I’m reviving this blog after this long hiatus.
Look, I like Go. It’s a fun language to code in. It has an extensive and mostly well-thought-out standard library that fits in my head (I’m looking at you, Java). I like the fact that I can whip out an HTTP service serving JSON in just a couple of lines using only a text editor, without the need for any external library or complex framework.
In the previous article, I quickly presented mow.cli, a command-line parsing library I created in Go, and compared it against docopt and codegangsta/cli. In this article, I’ll talk about the innards of mow.cli and how it uses finite state machines (FSM) and backtracking to handle some tricky usage strings, like the cp test™ for example:
Parsing command line arguments is a deceptively complex task. It looks simple at first: just iterate over the arguments; if one starts with a dash (-), it is an option, otherwise it is an argument.
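The naive approach described above can be sketched like this (an illustrative toy, not mow.cli’s actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// classify applies the naive rule: anything starting with a dash
// is an option, everything else is a positional argument.
func classify(args []string) (opts, pos []string) {
	for _, a := range args {
		if strings.HasPrefix(a, "-") {
			opts = append(opts, a)
		} else {
			pos = append(pos, a)
		}
	}
	return opts, pos
}

func main() {
	opts, pos := classify([]string{"-r", "src", "dst"})
	fmt.Println(opts, pos)
}
```

This rule breaks down quickly: it can’t tell whether `-f file` is a flag followed by an argument or an option with a value, and it mishandles things like `--` terminators, which is exactly the kind of ambiguity that motivates a more principled approach.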
There are dozens of package formats: apk, deb, rpm, jar, war, whl, egg, dmg, and the list goes on. Some are language-agnostic, like deb and rpm. They are still system-specific though, e.g. rpms have to be installed on a Fedora/RedHat system whereas debs go to Debian/Ubuntu systems.
In this post, I’ll show you how to set up Xcode for AVR development (in the C programming language) using X-AVR. X-AVR was born out of my frustration with Eclipse as an IDE for programming AVRs.
In the previous post, I showed how to write a recursive descent parser starting from an EBNF grammar. The grammar used as an example described a minimal expression language that could only handle numeric and boolean literals and if/else expressions.
In the previous post, I talked about some of the theory behind parsing. This post, however, will be about how to actually write down a parser which, given the list of tokens generated by the lexer, would generate an AST.
After a teaser about the Litil language, and a (boring) chapter about the lexing component, now we finally get to the interesting parts, starting with the parsing. There’s one catch though: we’ll have to go a bit into the theory of grammars and parsing.
Earlier this morning, I received 10 of these beauties in my mailbox: [photos: top and bottom of the board]. This is a development board I designed to ease prototyping with Atmel’s ATtiny85 micro-controllers. It’s a rather basic design, with the ATtiny85 in a DIP8 package in the center and traces exposing 5 of its digital pins (PB0 to PB4), plus ground and VCC, via a header at the top.
We are developers. At least, if you’re reading this blog, you most probably are. There are millions of us around the world. Some of us are passionate: we read blogs, keep up with the latest tech, and download and try out new frameworks as soon as they hit alpha.
In this second post of the Litil chronicle, I’m going to talk about lexing. Lexing is generally the first step in a compiler’s pipeline. What it does is transform the source code from its textual representation into a token sequence that’ll be consumed by the next stage, i.
I’m mainly using GitHub to publish the (tiny) projects I’m working on in my free time. mow.cli: a sophisticated yet simple-to-use library for writing command line applications in Go. Behind the scenes, mow.
I’ve always been fascinated by compilers and programming languages. I was fascinated by the fact that I could take control of the computer to make it do what I want. But I was even more impressed with the compilers themselves.
I just finished migrating this blog from Jekyll to Pelican. I stumbled upon (no pun intended) Pelican a couple of months ago on /r/python. I thought it was cool, but that was it. Recently though, I decided I’d be translating some Litil posts from French to English over the next few weeks.
In this second post of the Litil chronicle, I’m going to talk about the lexing phase. It is generally the first step in building a compiler (or evaluator) for a given language. This phase transforms the source code text (a sequence of characters) into a sequence of tokens, which will be consumed by the parser in the next stage.
Like many other computing enthusiasts, I have always been fascinated by compilers and programming languages, ever since I learned to program. But my fascination didn’t stop at the fact that I could command my machine by writing programs to produce (sometimes) useful results.
This is a recipe for easily running multiple Python web applications behind nginx using the uWSGI server (in emperor mode). Most existing docs and blog posts only show how to manually start uWSGI to run a single app.