Irreal: Zamansky 42: Git Gutter and Git Time Machine

Mike Zamansky has another video up in his excellent Using Emacs Series. This time he looks at Git Gutter and Git Time Machine. These are a couple of small utilities that make working with Git files and repositories a bit easier.

I’ve used Git Time Machine for a while. It’s one of those things you probably aren’t going to use that often—unless you have a special use case like Zamansky—but when you want to see how a file has changed over time it’s just the thing. You can see how it works in the video.

I haven’t used the other utility, Git Gutter, but it looks interesting. What it does is mark the differences between your current file and what’s in the repository. That makes it easy to see what changes you’ve made, which is especially useful when you’re working on a file for an extended time. You can also stage or revert individual hunks of code right from the utility. Again, Zamansky demonstrates this in the video. After watching the video, I’m going to install Git Gutter and see how it fits with my workflow.
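In case you want to try them yourself, here’s a minimal sketch of a setup, assuming use-package and the git-gutter and git-timemachine packages from MELPA (keybindings are left to you):

(use-package git-gutter
  :ensure t
  :config
  (global-git-gutter-mode 1))  ; mark changed lines in every file-visiting buffer

(use-package git-timemachine
  :ensure t)                   ; M-x git-timemachine steps through a file's git history

;; Handy hunk commands from git-gutter:
;;   git-gutter:next-hunk / git-gutter:previous-hunk
;;   git-gutter:stage-hunk / git-gutter:revert-hunk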

The video is about 8 and a half minutes so it will fit nicely into a coffee break.

-1:-- Zamansky 42: Git Gutter and Git Time Machine (Post jcs)--L0--C0--February 19, 2018 04:03 PM

Irreal: Emacs for Devops

Alexey Koval has a nice post on Emacs as a devops editor. It’s a little over a year old but it’s still worth taking a look at. I’m not involved with devops but I still learned a few useful things from the post and accompanying videos.

For example, Koval shows how to debug shell scripts from Emacs using bashdb. That can be really handy when your script is more than a few lines long. Of course, you can also run the script from inside Emacs and test things as you go along. In that respect, it’s another example of interactive programming, a method that I find especially useful.

He also has a nice section on using tramp to work with remote files. I tend to do stupid things like opening an SSH session and listing the files just to find the name of the one I want, but as Koval shows, you can simply open the remote directory you’re interested in to get a dired listing from which you can pick whatever file you need. Once you’ve got a remote session going, you can even start a remote shell that reuses the same SSH connection that tramp is using.
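As a rough sketch of that workflow (the host and path here are made up):

;; Opening a remote directory gives you a dired listing over tramp:
;;   C-x C-f /ssh:me@example.com:/var/log/ RET
;; From that buffer, M-x shell reuses tramp's existing SSH connection
;; because default-directory is remote:
(let ((default-directory "/ssh:me@example.com:/var/log/"))
  (shell "*remote shell*"))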

There are 7 short videos in the post that illustrate the points Koval’s making. My only complaint about them is that he doesn’t have a key display utility so it’s sometimes hard to follow what he’s doing. Nonetheless, the videos are really useful and I learned some new tricks from them.

The post is definitely worth looking at even if you’re not involved with devops.

-1:-- Emacs for Devops (Post jcs)--L0--C0--February 18, 2018 06:00 PM

Chris Wellons: Options for Structured Data in Emacs Lisp

This article has been translated into Russian by ClipArtMag.

So your Emacs package has grown beyond a dozen or so lines of code, and the data it manages is now structured and heterogeneous. Informal plain old lists, the bread and butter of any lisp, are no longer cutting it. You really need to cleanly abstract this structure, both for your own organizational sake and for anyone reading your code.

With informal lists as structures, you might regularly ask questions like, “Was the ‘name’ slot stored in the third list element, or was it the fourth element?” A plist or alist helps with this problem, but those are better suited for informal, externally-supplied data, not for internal structures with fixed slots. Occasionally someone suggests using hash tables as structures, but Emacs Lisp’s hash tables are much too heavy for this. Hash tables are more appropriate when keys themselves are data.

Defining a data structure from scratch

Imagine a refrigerator package that manages a collection of food in a refrigerator. A food item could be structured as a plain old list, with slots at specific positions.

(defun fridge-item-create (name expiry weight)
  (list name expiry weight))

A function that computes the mean weight of a list of food items might look like this:

(defun fridge-mean-weight (items)
  (if (null items)
      0.0
    (let ((sum 0.0)
          (count 0))
      (dolist (item items (/ sum count))
        (setf count (1+ count)
              sum (+ sum (nth 2 item)))))))

Note the use of (nth 2 item) at the end, used to get the item’s weight. That magic number 2 is easy to mess up. Even worse, if lots of code accesses “weight” this way, then future extensions will be inhibited. Defining some accessor functions solves this problem.

(defsubst fridge-item-name (item)
  (nth 0 item))

(defsubst fridge-item-expiry (item)
  (nth 1 item))

(defsubst fridge-item-weight (item)
  (nth 2 item))

The defsubst defines an inline function, so there’s effectively no additional run-time cost for these accessors compared to a bare nth. Since these only cover getting slots, we should also define some setters using the built-in gv (generalized variable) package.

(require 'gv)

(gv-define-setter fridge-item-name (value item)
  `(setf (nth 0 ,item) ,value))

(gv-define-setter fridge-item-expiry (value item)
  `(setf (nth 1 ,item) ,value))

(gv-define-setter fridge-item-weight (value item)
  `(setf (nth 2 ,item) ,value))

This makes each slot setf-able. Generalized variables are great for simplifying APIs, since otherwise there would need to be an equal number of setter functions (fridge-item-set-name, etc.). With generalized variables, both are at the same entrypoint:

(setf (fridge-item-name item) "Eggs")

There are still two more significant improvements.

  1. As far as Emacs Lisp is concerned, this isn’t a real type. The type-ness of it is just a fiction created by the conventions of the package. It would be easy to make the mistake of passing an arbitrary list to these fridge-item functions, and the mistake wouldn’t be caught so long as that list has at least three items. A common solution is to add a type tag: a symbol at the beginning of the structure that identifies it.

  2. It’s still a linked list, and nth has to walk the list (i.e. O(n)) to retrieve items. It would be much more efficient to use a vector, turning this into an efficient O(1) operation.

Addressing both of these at once:

(defun fridge-item-create (name expiry weight)
  (vector 'fridge-item name expiry weight))

(defsubst fridge-item-p (object)
  (and (vectorp object)
       (= (length object) 4)
       (eq 'fridge-item (aref object 0))))

(defsubst fridge-item-name (item)
  (unless (fridge-item-p item)
    (signal 'wrong-type-argument (list 'fridge-item item)))
  (aref item 1))

(defsubst fridge-item-name--set (item value)
  (unless (fridge-item-p item)
    (signal 'wrong-type-argument (list 'fridge-item item)))
  (setf (aref item 1) value))

(gv-define-setter fridge-item-name (value item)
  `(fridge-item-name--set ,item ,value))

;; And so on for expiry and weight...

As long as fridge-mean-weight uses the fridge-item-weight accessor, it continues to work unmodified across all these changes. But, whew, that’s quite a lot of boilerplate to write and maintain for each data structure in our package! Boilerplate code generation is a perfect candidate for a macro definition. Luckily for us, Emacs already defines a macro to generate all this code: cl-defstruct.

(require 'cl)

(cl-defstruct fridge-item
  name expiry weight)

In Emacs 25 and earlier, this innocent looking definition expands into essentially all the above code. The code it generates is expressed in the optimal form for its version of Emacs, and it exploits many of the available optimizations by using function declarations such as side-effect-free and error-free. It’s configurable, too, allowing for the exclusion of a type tag (:named) — discarding all the type checks — or using a list rather than a vector as the underlying structure (:type). As a crude form of structural inheritance, it even allows for directly embedding other structures (:include).
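As a quick illustration of those options (these structs are hypothetical additions, not part of the fridge package):

(cl-defstruct (fridge-leftover (:include fridge-item)) ; inherits name, expiry, weight
  container)

(cl-defstruct (fridge-note (:type list) :named)        ; backed by a tagged list, not a vector
  text author)

(fridge-note-p (make-fridge-note :text "buy milk"))
;; => t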

Two pitfalls

There are a couple of pitfalls, though. First, for historical reasons, the macro will define two namespace-unfriendly functions: make-NAME and copy-NAME. I always override these, preferring the -create convention for the constructor, and tossing the copier since it’s either useless or, worse, semantically wrong.

(cl-defstruct (fridge-item (:constructor fridge-item-create)
                           (:copier nil))
  name expiry weight)

If the constructor needs to be more sophisticated than just setting slots, it’s common to define a “private” constructor (double dash in the name) and wrap it with a “public” constructor that has some behavior.

(cl-defstruct (fridge-item (:constructor fridge-item--create)
                           (:copier nil))
  name expiry weight entry-time)

(cl-defun fridge-item-create (&rest args)
  (apply #'fridge-item--create :entry-time (float-time) args))

The other pitfall is related to printing. In Emacs 25 and earlier, types defined by cl-defstruct are still only types by convention. They’re really just vectors as far as Emacs Lisp is concerned. One benefit from this is that printing and reading these structures is “free” because vectors are printable. It’s trivial to serialize cl-defstruct structures out to a file. This is exactly how the Elfeed database works.
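For example, round-tripping a structure through a file can be as simple as this (the path is just an example):

(with-temp-file "/tmp/fridge.eld"
  (prin1 (fridge-item-create :name "Eggs" :weight 11.1)
         (current-buffer)))

(with-temp-buffer
  (insert-file-contents "/tmp/fridge.eld")
  (fridge-item-weight (read (current-buffer))))
;; => 11.1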

The pitfall is that once a structure has been serialized, there’s no more changing the cl-defstruct definition. It’s now a file format definition, so the slots are locked in place. Forever.

Emacs 26 throws a wrench in all this, though it’s worth it in the long run. There’s a new primitive type in Emacs 26 with its own reader syntax: records. This is similar to hash tables becoming first class in the reader in Emacs 23.2. In Emacs 26, cl-defstruct uses records instead of vectors.

;; Emacs 25:
(fridge-item-create :name "Eggs" :weight 11.1)
;; => [cl-struct-fridge-item "Eggs" nil 11.1]

;; Emacs 26:
(fridge-item-create :name "Eggs" :weight 11.1)
;; => #s(fridge-item "Eggs" nil 11.1)

So far slots are still accessed using aref, and all the type checking still happens in Emacs Lisp. The only practical change is that the record function is used in place of the vector function when allocating a structure. But it does pave the way for more interesting things in the future.
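Concretely, the allocation difference amounts to roughly this (a sketch, not the macro’s exact expansion):

;; Emacs 25 and earlier: a tagged vector
(vector 'cl-struct-fridge-item "Eggs" nil 11.1)
;; Emacs 26: a record whose first slot is the type name
(record 'fridge-item "Eggs" nil 11.1)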

The major short-term downside is that this breaks printed compatibility across the Emacs 25/26 boundary. The cl-old-struct-compat-mode function can be used for some degree of backwards, but not forwards, compatibility. Emacs 26 can read and use some structures printed by Emacs 25 and earlier, but the reverse will never be true. This issue initially tripped up Emacs’ built-in packages, and when Emacs 26 is released we’ll see more of these issues arise in external packages.

Dynamic dispatch

Prior to Emacs 25, the major built-in package for dynamic dispatch — functions that specialize on the run-time type of their arguments — was EIEIO, though it only supported single dispatch (specializing on a single argument). EIEIO brought much of the Common Lisp Object System (CLOS) to Emacs Lisp, including classes and methods.

Emacs 25 introduced a more sophisticated dynamic dispatch package called cl-generic. It focuses only on dynamic dispatch and supports multiple dispatch, completely replacing the dynamic dispatch portion of EIEIO. Since cl-defstruct does inheritance and cl-generic does dynamic dispatch, there’s not really much left for EIEIO — besides bad ideas like multiple inheritance and method combination.

Without either of these packages, the most direct way to build single dispatch on top of cl-defstruct would be to shove a function in one of the slots. Then the “method” is just a wrapper that calls this function.

;; Base "class"

(cl-defstruct greeter
  greeting)

(defun greet (thing)
  (funcall (greeter-greeting thing) thing))

;; Cow "class"

(cl-defstruct (cow (:include greeter)
                   (:constructor cow--create)))

(defun cow-create ()
  (cow--create :greeting (lambda (_) "Moo!")))

;; Bird "class"

(cl-defstruct (bird (:include greeter)
                    (:constructor bird--create)))

(defun bird-create ()
  (bird--create :greeting (lambda (_) "Chirp!")))

;; Usage:

(greet (cow-create))
;; => "Moo!"

(greet (bird-create))
;; => "Chirp!"

Since cl-generic is aware of the types created by cl-defstruct, functions can specialize on them as if they were native types. It’s a lot simpler to let cl-generic do all the hard work. The people reading your code will appreciate it, too:

(require 'cl-generic)

(cl-defgeneric greet (greeter))

(cl-defstruct cow)

(cl-defmethod greet ((_ cow))
  "Moo!")

(cl-defstruct bird)

(cl-defmethod greet ((_ bird))
  "Chirp!")

(greet (make-cow))
;; => "Moo!"

(greet (make-bird))
;; => "Chirp!"

The majority of the time a simple cl-defstruct will fulfill your needs, keeping in mind the gotcha with the constructor and copier names. Its use should feel almost as natural as defining functions.

-1:-- Options for Structured Data in Emacs Lisp (Post)--L0--C0--February 14, 2018 05:43 PM

sachachua: 2018-02-12 Emacs news

Links from reddit.com/r/emacs, /r/orgmode, /r/spacemacs, Hacker News, planet.emacsen.org, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2018-02-12 Emacs news (Post Sacha Chua)--L0--C0--February 12, 2018 11:35 PM

Marcin Borkowski: append-next-kill

Today I’d like to share a simple trick which I learned a few days ago. It is well-known that if you perform a few killing commands in a row (like C-k or M-d), only one entry is put into the kill ring. Sometimes, however, I want to kill things in different places and still combine them into one kill ring entry. Enter C-M-w, or M-x append-next-kill. It makes the next killing command append its prey to the last kill ring entry.
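For example, to collect text from two different places into a single kill ring entry:

  M-d          kill a word here
               (move point somewhere else)
  C-M-w M-d    append-next-kill, then kill another word
  C-y          yanks both words as one kill ring entry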
-1:-- append-next-kill (Post)--L0--C0--February 12, 2018 08:23 PM

Alex Schroeder: Buttery Smooth Emacs

This is the best blog post about Emacs in a long time. I’m still laughing. Buttery Smooth Emacs, Friday, October 28, 2016.

«GNU Emacs is an old-school C program emulating a 1980s Symbolics Lisp Machine emulating an old-fashioned Motif-style Xt toolkit emulating a 1970s text terminal emulating a 1960s teletype.»

«Emacs organizes its view of the outside world into frames (what the rest of the world calls “windows”), windows (which the rest of the world calls “panes”), and buffers (which the rest of the world calls “documents”).»

«Did Emacs just adapt to whatever these non-Xt toolkits did? Did Emacs adopt modern best practices? GTK+ is a modern GUI library. Emacs supports GTK+. Is Emacs a well-behaved GTK+ program now?»

«What’s particularly hilarious is that SIGIO can happen in the middle of redisplay. The REPL loop (in the Emacs case, not Read Eval Print, but Read Eval WTF) can be recursive.»

No, really. This blog post just keeps on giving.

(I don’t get why people use Facebook as their blog but whatever, this blog post is great.)

-1:-- Buttery Smooth Emacs (Post)--L0--C0--February 12, 2018 05:55 PM

Tom Tromey: JIT Compilation for Emacs

There have been a few efforts at writing an Emacs JIT — the original one, Burton Samograd’s, and also Nick Lloyd’s. So, what else to do except write my own?

Like the latter two, I based mine on GNU libjit. I did look at a few other JIT libraries: LLVM, gcc-jit, GNU Lightning, MyJit.  libjit seemed like a nice middle ground between a JIT with heavy runtime costs (LLVM, GCC) and one that is too lightweight (Lightning).

All of these Emacs JITs work by compiling bytecode to native code.  Now, I don’t actually think that is the best choice — it’s just the easiest — but my other project to do a more complete job in this area isn’t really ready to be released.  So bytecode it is.

Emacs implements a somewhat weird stack-based bytecode.  Many ordinary things are there, but seemingly obvious stack operations like “swap” do not exist; and there are bytecodes for very specialized Emacs operations like forward-char or point-max.
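You can see these specialized opcodes for yourself with disassemble (output elided here):

(disassemble (byte-compile (lambda () (goto-char (point-max)))))
;; the listing contains dedicated goto-char and point-max opcodes
;; rather than ordinary function calls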

Samograd describes his implementation as “compiling down the spine”.  What he means by this is that the body of each opcode is implemented by some C function, and the JIT compiler emits, essentially, a series of subroutine calls.  This used to be called “jsr threading” in the olden days, though maybe it has some newer names by now.

Of course, we can do better than this, and Lloyd’s JIT does.  His emits instructions for the bodies of most bytecodes, deferring only a few to helper functions.  This is a better approach because many of these operations are only one or two instructions.

However, his approach takes a wrong turn by deferring stack operations to the compiled code.  For example, in this JIT, the Bdiscard opcode, which simply drops some items from the stack, is implemented as:

 CASE (Bdiscard):
 {
   JIT_NEED_STACK;
   JIT_INC (ctxt.stack, -sizeof (Lisp_Object));
   JIT_NEXT;
   NEXT;
 }

It turns out, though, that this isn’t needed — at least, for the bytecode generated by the Emacs byte-compiler, the stack depth at any given PC is a constant.  This means that the stack adjustments can be done at compile time, not runtime, leading to a performance boost.  So, the above opcode doesn’t need to emit code at all.

(And, if you’re worried about hand-crafted bytecode, it’s easy to write a little bytecode verifier to avoid JIT-compiling invalid things.  Though of course you shouldn’t worry, since you can already crash Emacs with bad bytecode.)

So, naturally, my implementation does not do this extra work.  And, it inlines more operations besides.

Caveat

I’ve only enabled the JIT for bytecode that uses lexical binding.  There isn’t any problem enabling it everywhere, I just figured it probably isn’t that useful, and so I didn’t bother.

Results

The results are pretty good.  First of all, I have it set up to automatically JIT compile every function, and this doesn’t seem any slower than ordinary Emacs, and it doesn’t crash.

Using the “silly-loop” example from the Emacs Lisp manual, with lexical binding enabled, I get these results:

Mode            Time (s)
Interpreted     4.48
Byte compiled   0.91
JIT             0.26

This is essentially the best case for this JIT, though.
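If you want to time something similar yourself, benchmark-run makes the measurement easy (this is a rough equivalent of silly-loop, not the manual’s exact definition):

(require 'benchmark)

(defun my/silly-loop (n)
  "Spin N times doing nothing, so only loop overhead is measured."
  (let ((i 0))
    (while (< i n)
      (setq i (1+ i)))))

(benchmark-run 1 (my/silly-loop 50000000))
;; => (seconds gc-runs gc-seconds)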

Future Directions

I have a few ideas for how to improve the performance of the generated code.  One way to look at this is to look at Emacs’ own C code, to see what advantages it has over JIT-compiled code.  There are really three: cheaper function calls, inlining, and unboxing.

Calling a function in Emacs Lisp is quite expensive.  A call from the JIT requires marshalling the arguments into an array, then calling Ffuncall; which then might dispatch to a C function (a “subr”), the bytecode interpreter, or the ordinary interpreter.  In some cases this may require allocation.

This overhead applies to nearly every call — but the C implementation of Emacs is free to call various primitive functions directly, without using Ffuncall to indirect through some Lisp symbol.

Now, these direct calls aren’t without a cost: they prevent the modification of some functions from Lisp.  Sometimes this is a pain (it might be handy to hack on load from Lisp), but in many cases it is unimportant.

So, one idea for the JIT is to keep a list of such functions and then emit direct calls rather than indirect ones.

Even better than this would be to improve the calling convention so that all calls are less expensive.  However, because a function can be redefined with different arguments, it is tricky to see how to do this efficiently.

In the Emacs C code, many things are inlined that still aren’t inlined in the JIT — just look through lisp.h for all the inline functions (and/or macros, lisp.h is “unusual”).  Many of these things could be done in the JIT, though in some cases it might be more work than it is worth.

Even better, but also even more difficult, would be inlining from one bytecode function into another.  High-performance JITs do this when they notice a hot spot in the code.

Finally, unboxing.  In the Emacs C code, it’s relatively normal to type-check Lisp objects and then work solely in terms of their C analogues after that point.  This is more efficient because it hoists the tag manipulations.  Some work like this could be done automatically, by writing optimization passes for libjit that work on libjit’s internal representation of functions.

Getting the Code

The code is on the libjit branch in my Emacs repository on github.  You’ll have to build your own libjit, too, and if you want to avoid hacking on the Emacs Makefile, you will need my fork of libjit that adds pkg-config files.

-1:-- JIT Compilation for Emacs (Post tom)--L0--C0--February 08, 2018 11:05 PM

Alex Bennée: FOSDEM 2018

I’ve just returned from a weekend in Brussels for my first ever FOSDEM – the Free and Open Source Developers, European Meeting. It’s been on my list of conferences to go to for some time and thanks to getting my talk accepted, my employer financed the cost of travel and hotels. Thanks to the support of the Université libre de Bruxelles (ULB) the event itself is free and run entirely by volunteers. As you can expect from the name they also have a strong commitment to free and open source software.

The first thing that struck me about the conference is how wide ranging it was. There were talks on everything from the internals of debugging tools to developing public policy. When I first loaded up their excellent companion app (naturally via the F-Droid repository) I was somewhat overwhelmed by the choice. As it is a free conference there is no limit on the numbers who can attend which means you are not always guaranteed to be able to get into every talk. In fact during the event I walked past many long queues for the more popular talks. In the end I ended up just bookmarking all the talks I was interested in and deciding which one to go to depending on how I felt at the time. Fortunately FOSDEM have a strong archiving policy and video most of their talks so I’ll be spending the next few weeks catching up on the ones I missed.

There now follows a non-exhaustive list of the most interesting ones I was able to see live:

Dashamir’s talk on EasyGPG dealt with the opinionated decisions it makes to try and make the use of GnuPG more intuitive to those not versed in the full gory details of public key cryptography. Although I use GPG mainly for signing Git pull requests I really should make better use of it overall. The split-key solution to backups was particularly interesting. I suspect I’ll need a little convincing before I put part of my key in the cloud but I’ll certainly check out his scripts.

Liam’s A Circuit Less Travelled was an entertaining tour of some of the technologies and ideas from early computer history that got abandoned on the wayside. These ideas were often re-invented later, in inferior form, as engineers realised the error of their ways once technology advanced. The latter half of the talk turns into a bit of LISP love-fest but as an Emacs user with an ever growing config file that is fine by me 😉

Following on in the history vein was Steven Goodwin’s talk on Digital Archaeology which was a salutary reminder of the amount of recent history that is getting lost as computing’s breakneck pace has discarded old physical formats in favour of newer, equally short-lived formats. It reminded me I should really do something about the 3 boxes of floppy disks I have under my desk. I also need to schedule a visit to the Computer History Museum with my children seeing as it is more or less on my doorstep.

There was a tongue-in-cheek preview that described the EDSAC talk as recreating “an ancient computer without any of the things that made it interesting”. This was a little unkind. Although the project re-implemented the computation parts in a tiny little FPGA the core idea was to introduce potential students to the physicality of the early computers. After an introduction to the hoary architecture of the original EDSAC and the Wheeler Jump Mary introduced the hardware they re-imagined for the project. The first was an optical reader developed to read in paper tapes although this time ones printed on thermal receipt paper. This included an in-depth review of the problems of smoothing out analogue inputs to get reliable signals from their optical sensors which mirrors the problems the rebuild is facing with the nature of the valves used in EDSAC. It is a shame they couldn’t come up with some way to involve a valve but I guess high-tension supplies and school kids don’t mix well. However they did come up with a way of re-creating the original acoustic mercury delay lines but this time with a tube of air and some 3D printed parabolic ends.

The big geek event was the much anticipated announcement of RISC-V hardware during the RISC-V enablement talk. It seemed to be an open secret the announcement was coming but it still garnered hearty applause when it finally came. I should point out I’m indirectly employed by companies with an interest in a competing architecture but it is still good to see other stuff out there. The board is fairly open but there are still some peripheral IPs which were closed, which shows just how tricky getting to fully-free hardware is going to be. As I understand RISC-V’s licensing model, the ISA is open (unlike for example an ARM Architecture License) but individual companies can still have closed implementations which they license to be manufactured, which is how I assume SiFive funds development. The actual CPU implementation is still very much a black box you have to take on trust.

Finally, my talk is already online for those that are interested in what I’m currently working on. The slides have been slightly cropped in the video but if you follow the link to the HTML version you can read along on your machine.

I have to say FOSDEM’s setup is pretty impressive. Although there was a volunteer in each room to deal with fire safety and replace microphones all the recording is fully automated. There are rather fancy hand crafted wooden boxes in each room which take the feed from your laptop and mux it with the camera. I got the email from the automated system asking me to review a preview of my talk about half an hour after I gave it. It took a little longer for the final product to get encoded and online but it’s certainly the nicest system I’ve come across so far.

All in all I can heartily recommend FOSDEM for anyone with an interest in FLOSS. It’s a packed schedule and there is going to be something for everyone there. Big thanks to all the volunteers and organisers and I hope I can make it next year 😉

-1:-- FOSDEM 2018 (Post Alex)--L0--C0--February 06, 2018 09:36 AM

sachachua: 2018-02-05 Emacs news

Links from reddit.com/r/emacs, /r/orgmode, /r/spacemacs, Hacker News, planet.emacsen.org, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2018-02-05 Emacs news (Post Sacha Chua)--L0--C0--February 05, 2018 04:09 PM

Raimon Grau: 2 ways to anchor a regex in elisp

This one I just learnt reading a PR in the melpa repo.


Usually we use ^ and $ to match the beginning and end of the line when dealing with regular expressions.

But, the same way we have \A and \z in ruby, the elisp manual (elisp regex backslash) explains there are \` and \' (which would be written \\` and \\' inside your regex string) to anchor the match to the beginning and end of the string or buffer. $ only matches the end of a line, so "hello$" will match "hello\ngoodbye", while "hello\\'" will not.
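A quick check shows the difference:

(string-match-p "hello$" "hello\ngoodbye")   ; => 0, matches at the end of the first line
(string-match-p "hello\\'" "hello\ngoodbye") ; => nil, "hello" is not at the end of the string
(string-match-p "hello\\'" "goodbye\nhello") ; => 8, matches at the end of the string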
-1:-- 2 ways to anchor a regex in elisp (Post Raimon Grau (noreply@blogger.com))--L0--C0--January 27, 2018 01:39 PM

Manuel Uberti: Getting ready for Dutch Clojure Days

As a functional programming jock, I have a confession to make: I have never been to a Clojure conference. There was a bit of Clojure in the now defunct LambdaCon, but the talks there were not as riveting as the ones I caught on YouTube from the likes of Clojure/conj, clojuTRE and Clojure/west.

No need to sound depressing, though. Thanks to 7bridges, I will happily attend Dutch Clojure Days 2018 on April 21st. As much as my enthusiasm is hard to contain, I plan to fulfil a bunch of resolutions without losing myself in total exuberance.

Learn something new

I know the list of speakers is not ready yet, but surely something new and good is waiting for me. This is usually what happens with the talks once the conference I missed makes the videos available, so I am pretty confident there is going to be a lot of food for my brain.

Learn something better

As far as my Clojure projects go, there is still plenty I have to master. Transducers? Spec? Design patterns? Performance? UX? Hit me, please. The amateur in me is eager to become a Clojure programmer worth his salt.

Join the community

Last but not least, I will set aside my never-ending fight with sociability and enjoy the Clojure community for real. I’ll be in Amsterdam from Friday afternoon to Sunday evening, so you will have enough time to join me in some healthy discussions about your favourite programming language. Or Emacs, if you fancy wild topics.

-1:-- Getting ready for Dutch Clojure Days (Post)--L0--C0--January 24, 2018 12:00 AM

Mathias Dahl: Make a copy of saved files to another directory


For various reasons I needed to sync files from one folder to another as soon as a certain file was saved in the first folder. I was wondering if Emacs could do this for me, and of course it could :)

Basically, what I am using below is Emacs' `after-save-hook' together with a list of regular expressions matching files to be "synced" and the target folder to copy the files to. Each time I save a file, the list of regexps will be checked and if there is a match, the file will also be copied to the defined target directory. Neat!

It works very well so I thought of sharing it in this way. Also, it was a long time since I wrote a blog post here... :)

Put the following code in your .emacs or init.el file and then customize after-save-file-sync-regexps.

Enjoy!

;; The Code

(defcustom after-save-file-sync-regexps nil
  "A list of cons cells consisting of two strings. The `car' of
each cons cell is the regular expression matching the file(s)
that should be copied, and the `cdr' is the target directory."
  :group 'files
  :type '(repeat (cons string string)))

(defcustom after-save-file-sync-ask-if-overwrite nil
  "Ask the user before overwriting the destination file.
When set to a non-`nil' value, the user will be asked. When
`nil', the file will be copied without asking."
  :group 'files
  :type 'boolean)

(defun after-save-file-sync ()
  "Sync the current file if it matches one of the regexps.

This function will match each regexp in
`after-save-file-sync-regexps' against the current file name. If
there is a match, the current file will be copied to the
configured target directory.

If the file already exists in the target directory, the option
`after-save-file-sync-ask-if-overwrite' will control if the file
should be written automatically or if the user should be
presented with a question.

In theory, the same file can be copied to multiple target
directories, by configuring multiple regexps that match the same
file."

  (dolist (file-regexp after-save-file-sync-regexps)
    (when (string-match (car file-regexp) (buffer-file-name))
      (let ((directory (file-name-as-directory (cdr file-regexp))))
        (copy-file (buffer-file-name) directory (if after-save-file-sync-ask-if-overwrite 1 t))
        (message "Copied file to %s" directory)))))

(add-hook 'after-save-hook 'after-save-file-sync)

;; The End
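For reference, a configuration might look something like this (the paths are just examples):

(setq after-save-file-sync-regexps
      '(("/notes/.*\\.org\\'" . "~/backup/notes/")
        ("/blog/.*\\.html\\'" . "/ssh:me@example.com:/var/www/")))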



-1:-- Make a copy of saved files to another directory (Post Mathias Dahl (noreply@blogger.com))--L0--C0--January 23, 2018 08:19 PM

Marcin Borkowski: Info-edit

There was an interesting (and sometimes quite amusing) discussion about Info on the help-gnu-emacs mailing list. Apart from learning that there exist people who do not even want to use the amazing Info mode, I learned that there is a very little known Emacs command Info-edit. It used to be bound to e in Info, but was deprecated some time ago and now the only way to invoke it seems to be by M-: (Info-edit). It puts the browsed Info buffer into an editing mode; after the edit, you can press C-c C-c and be asked about where to save the edited file.
-1:-- Info-edit (Post)--L0--C0--January 22, 2018 04:31 AM

Chris Wellons: Debugging Emacs or: How I Learned to Stop Worrying and Love DTrace

For some time Elfeed was experiencing a strange, spurious failure. Every so often users were seeing an error (spoiler warning) when updating feeds: “error in process sentinel: Search failed.” If you use Elfeed, you might have even seen this yourself. From the surface it appeared that curl, tasked with the responsibility for downloading feed data, was producing incomplete output despite reporting a successful run. Since the run was successful, Elfeed assumed certain data was in curl’s output buffer, but, since it wasn’t, it failed hard.

Unfortunately this issue was not reproducible. Manually running curl outside of Emacs never revealed any issues. Asking Elfeed to retry fetching the feeds would work fine. The issue would only randomly rear its head when Elfeed was fetching many feeds in parallel, under stress. By the time the error was discovered, the curl process had exited and vital debugging information was lost. Considering that this was likely to be a bug in Emacs itself, there really wasn’t a reliable way to capture the necessary debugging information from within Emacs Lisp. And, indeed, this later proved to be the case.

A quick-and-dirty work around is to use condition-case to catch and swallow the error. When the bizarre issue shows up, rather than fail badly in front of the user, Elfeed could attempt to swallow the error — assuming it can be reliably detected — and treat the fetch as simply a failure. That didn’t sit comfortably with me. Elfeed had done its due diligence checking for errors already. Someone was lying to Elfeed, and I intended to catch them with their pants on fire. Someday.

I’d just need to witness the bug on one of my own machines. Elfeed is part of my daily routine, so surely I’d have to experience this issue myself someday. My plan was, should that day come, to run a modified Elfeed, instrumented to capture extra data. I would have also routinely run Emacs under GDB so that I could inspect the failure more deeply.

For now I just had to wait to hunt that zebra.

Bryan Cantrill, DTrace, and FreeBSD

Over the holidays I re-discovered Bryan Cantrill, a systems software engineer who worked for Sun between 1996 and 2010, and is most well known for DTrace. My first exposure to him was in a BSD Now interview in 2015. I had re-watched that interview and decided there was a lot more I had to learn from him. He’s become a personal hero to me. So I scoured the internet for more of his writing and talks. Besides what I’ve already linked in this article, here are a couple more great presentations:

You can also find some of his writing scattered around the DTrace blog.

Some interesting operating system technology came out of Sun during its final 15 or so years — most notably DTrace and ZFS — and Bryan speaks about it passionately. Almost as a matter of luck, most of it survived the Oracle acquisition thanks to Sun releasing it as open source in just the nick of time. Otherwise it would have been lost forever. The scattered ex-Sun employees, still passionate about their prior work at Sun, along with some of their old customers have since picked up the pieces and kept going as a community under the name illumos. It’s like an open source flotilla.

Naturally I wanted to get my hands on this stuff to try it out for myself. Is it really as good as they say? Normally I stick to Linux, but it (generally) doesn’t have these Sun technologies. The main reason is license incompatibility. Sun released its code under the CDDL, which is incompatible with the GPL. Ubuntu does infamously include ZFS, but other distributions are unwilling to take that risk. Porting DTrace is a serious undertaking since it’s got its fingers throughout the kernel, which also makes the licensing issues even more complicated.

(Update February 2018: DTrace has been released under the GPLv2, allowing it to be legally integrated with Linux.)

Linux has a reputation for Not Invented Here (NIH) syndrome, and these licensing issues certainly contribute to that. Rather than adopt ZFS and DTrace, they’ve been reinvented from scratch: btrfs instead of ZFS, and a slew of partial options instead of DTrace. Normally I’m most interested in system call tracing, and my go to is strace, though it certainly has its limitations — including this situation of debugging curl under Emacs. Another famous example of NIH is Linux’s epoll(2), which is a broken version of BSD kqueue(2).

So, if I want to try these for myself, I’ll need to install a different operating system. I’ve dabbled with OmniOS, an OS built on illumos, in virtual machines, using it as an alien environment to test some of my software (e.g. enchive). OmniOS has a philosophy called Keep Your Software To Yourself (KYSTY), which is really just code for “we don’t do packaging.” Honestly, you can’t blame them since they’re a tiny community. The best solution to this is probably pkgsrc, which is essentially a universal packaging system. Otherwise you’re on your own.

There’s also openindiana, which is a more friendly desktop-oriented illumos distribution. Still, the short of it is that you’re very much on your own when things don’t work. The situation is like running Linux a couple decades ago, when it was still difficult to do.

If you’re interested in trying DTrace, the easiest option these days is probably FreeBSD. It’s got a big, active community, thorough documentation, and a huge selection of packages. Its license (the BSD license, duh) is compatible with the CDDL, so both ZFS and DTrace have been ported to FreeBSD.

What is DTrace?

I’ve done all this talking but haven’t yet described what DTrace really is. I won’t pretend to write my own tutorial, but I’ll provide enough information to follow along. DTrace is a tracing framework for debugging production systems in real time, both for the kernel and for applications. The “production systems” part means it’s stable and safe — using DTrace won’t put your system at risk of crashing or damaging data. The “real time” part means it has little impact on performance. You can use DTrace on live, active systems with little impact. Both of these core design principles are vital for troubleshooting those really tricky bugs that only show up in production.

There are DTrace probes scattered all throughout the system: on system calls, scheduler events, networking events, process events, signals, virtual memory events, etc. Using a specialized language called D (unrelated to the general purpose programming language D), you can dynamically add behavior at these instrumentation points. Generally the behavior is to capture information, but it can also manipulate the event being traced.

Each probe is fully identified by a 4-tuple delimited by colons: provider, module, function, and probe name. An empty element denotes a sort of wildcard. For example, syscall::open:entry is a probe at the beginning (i.e. “entry”) of open(2). syscall:::entry matches all system call entry probes.

Unlike strace on Linux which monitors a specific process, DTrace applies to the entire system when active. To run curl under strace from Emacs, I’d have to modify Emacs’ behavior to do so. With DTrace I can instrument every curl process without making a single change to Emacs, and with negligible impact to Emacs. That’s a big deal.

So, when it comes to this Elfeed issue, FreeBSD is much better poised for debugging the problem. All I have to do is catch it in the act. However, it’s been months since that bug report and I’m not really making this connection yet. I’m just hoping I eventually find an interesting problem where I can apply DTrace.

FreeBSD on a Raspberry Pi 2

So I’ve settled on FreeBSD as the playground for these technologies; I just have to decide where. I could always run it in a virtual machine, but it’s always more interesting to try things out on real hardware. FreeBSD supports the Raspberry Pi 2 as a Tier 2 system, and I had a Raspberry Pi 2 sitting around collecting dust, so I put it to use.

I wrote the image to an SD card, and for a few days I stretched my legs on this new system. I cloned a couple dozen of my own git repositories, ran the builds and the tests, and just got a feel for things. I tried out the ports system for the first time, mainly to discover that the low-powered Raspberry Pi 2 takes days to build some of the packages I want to try.

I mostly program in Vim these days, so it’s some days before I even set up Emacs. Eventually I do build Emacs, clone my configuration, fire it up, and give Elfeed a spin.

And that’s when the “search failed” bug strikes! Not just once, but dozens of times. Perfect! This low-powered platform is the jackpot for this particular bug, triggering it left and right. Given that I’ve got DTrace at my disposal, it’s the perfect place to debug this. Something is lying to Elfeed and DTrace will play the judge.

Before I dive in I see three possibilities:

  1. curl is reporting success but truncating its output.
  2. Emacs is quietly truncating curl’s output.
  3. Emacs is misinterpreting curl’s exit status.

With DTrace I can observe what every curl process writes to Emacs, and I can also double check curl’s exit status. I come up with the following (newbie) DTrace script:

syscall::write:entry
/execname == "curl"/
{
    printf("%d WRITE %d \"%s\"\n",
           pid, arg2, stringof(copyin(arg1, arg2)));
}

syscall::exit:entry
/execname == "curl"/
{
    printf("%d EXIT  %d\n", pid, arg0);
}

The /execname == "curl"/ is a predicate that (obviously) causes the behavior to only fire for curl processes. The first probe has DTrace print a line for every write(2) from curl. arg0, arg1, and arg2 correspond to the arguments of write(2): fd, buf, count. It logs the process ID (pid) of the write, the length of the write, and the actual contents written. Remember that these curl processes are run in parallel by Emacs, so the pid allows me to associate the separate writes and the exit status.

The second probe prints the pid and the exit status (the first argument to exit(2)).

I also want to compare this to exactly what is delivered to Elfeed when curl exits, so I modify the process sentinel — the callback that handles a subprocess exiting — to call write-file before any action is taken. I can compare these buffer dumps to the logs produced by DTrace.
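The hack itself is only a few lines. Here’s a sketch of the idea (not Elfeed’s actual code, and the process variable is hypothetical):

(defun my/dump-process-buffer (process _status)
  "Save PROCESS's output buffer to a temp file before the real sentinel acts."
  (when (buffer-live-p (process-buffer process))
    (with-current-buffer (process-buffer process)
      (write-region (point-min) (point-max)
                    (make-temp-file "curl-dump-")))))

;; Attach it ahead of the existing sentinel on some process `proc':
(add-function :before (process-sentinel proc) #'my/dump-process-buffer)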

There are two important findings.

First, when the “search failed” bug occurs, the buffer was completely empty (95% of the time) or truncated at the end of the HTTP headers (5% of the time), right at the blank line. DTrace indicates that curl did its job to the full, so it’s Emacs who’s the liar. It’s not delivering all of curl’s data to Elfeed. That’s pretty annoying.

Second, curl was line-buffered. Each line was a separate, independent write(2). I was certainly not expecting this. Normally the C library only does line buffering when the output is a terminal. That’s because it’s guessing a user may be watching, expecting the output to arrive a line at a time.

Here’s a sample of what it looked like in the log:

88188 WRITE 32 "Server: Apache/2.4.18 (Ubuntu)
"
88188 WRITE 46 "Location: https://blog.plover.com/index.atom
"
88188 WRITE 21 "Content-Length: 299
"
88188 WRITE 45 "Content-Type: text/html; charset=iso-8859-1
"
88188 WRITE 2 "
"

Why would curl think Emacs is a terminal?

Oh. That’s right. This is the same problem I ran into four years ago when writing EmacSQL. By default Emacs connects to subprocesses through a pseudo-terminal (pty). I called this a mistake in Emacs back then, and I still stand by that claim. The pty causes weird, annoying problems for little benefit:

  • Interpreting control characters. Hope you weren’t transferring binary data!
  • Subprocesses will generally get line buffered. This makes them slower, though in some situations it might be desirable.
  • Stdout and stderr get mixed together. (Optional since Emacs 25.)
  • New! There’s a bug somewhere in Emacs that causes truncation when ptys are used heavily in parallel.

Just from eyeballing the DTrace log I knew what to do: dump the pty and switch to a pipe. This is controlled with the process-connection-type variable, and fixing it is a one-liner.
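The gist of the fix (a simplified sketch, not Elfeed’s exact code) is to bind process-connection-type around the subprocess launch:

(let ((process-connection-type nil))  ; use a pipe rather than a pty
  (start-process "curl" (generate-new-buffer " *curl*")
                 "curl" "--silent" "https://example.com/feed.xml"))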

Not only did this completely resolve the truncation issue, Elfeed is noticeably faster at fetching feeds on all machines. It’s no longer receiving mountains of XML one line at a time, like sucking pudding through a straw. It’s now quite zippy even on my Raspberry Pi 2, which had never been the case before (without the “search failed” bug). Even if you were never affected by this bug, you will benefit from the fix.

I haven’t officially reported this as an Emacs bug yet because reproducibility is still an issue. It needs something better than “fire off a bunch of HTTP requests across the internet in parallel from a Raspberry Pi.”

The fix reminds me of that old boilermaker story about charging a lot of money just to swing a hammer. Once the problem arose, DTrace quickly helped to identify the place to hit Emacs with the hammer.

Finally, a big thanks to alphapapa for originally taking the time to report this bug months ago.

-1:-- Debugging Emacs or: How I Learned to Stop Worrying and Love DTrace (Post)--L0--C0--January 17, 2018 11:59 PM

Alex Bennée: Edit with Emacs v1.15 released

After a bit of hiatus there was enough of a flurry of patches to make it worth pushing out a new release. I’m in a little bit of a quandary with what to do with this package now. It’s obviously a useful extension for a good number of people but I notice the slowly growing number of issues which I’m not making much progress on. It’s hard to find time to debug and fix things when its main state is Works For Me. There is also competition from the Atomic Chrome extension (and its related Emacs extension). It’s an excellent package and has the advantage of a Chrome extension that is more actively developed and uses a bi-directional web-socket to communicate with the edit server. It’s been a feature I’ve wanted to add to Edit with Emacs for a while but my re-factoring efforts are slowed down by the fact that Javascript is not a language I’m fluent in and finding a long enough period of spare time is hard with a family. I guess this is a roundabout way of saying that realistically this package is in maintenance mode and you shouldn’t expect to see any new development for the time being. I’ll of course try my best to address reproducible bugs and process pull requests in a timely manner. That said please enjoy v1.15:

Extension

* Now builds for Firefox using WebExtension hooks
* Use chrome.notifications instead of webkitNotifications
* Use … with style instead of inline for edit button
* fake “input” event to stop active page components overwriting text area

edit-server.el

* avoid calling make-frame-on-display for TTY setups (#103/#132/#133)
* restore edit-server-default-major-mode if auto-mode lookup fails
* delete window when done editing with no new frame

Get the latest from the Chrome Webstore.

-1:-- Edit with Emacs v1.15 released (Post Alex)--L0--C0--January 17, 2018 04:47 PM

Phil Hagelberg: in which the cost of structured data is reduced

Last year I got the wonderful opportunity to attend RacketCon as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.

lensmen chronicles

I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)

The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.

GUIs and XML

I have yet to see a language/framework more accessible and straightforward out of the box for drawing[1]. Here's the entry point which sets up state and then constructs a canvas that handles key input and display:

(define (main path)
  (let ([frame (new frame% [label "World color"])]
        [categorizations (box '())]
        [doc (call-with-input-file path read-xml/document)])
    (new (class canvas%
           (define/override (on-char event)
             (handle-key this categorizations (send event get-key-code)))
           (super-new))
         [parent frame]
         [paint-callback (draw doc categorizations)])
    (send frame show #t)))

While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of generic interfaces in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a box which you use in the way you'd use a ref in ML or Clojure: a mutable wrapper around an immutable data structure.

The world map I'm using is an SVG of the Robinson projection from Wikipedia. If you look closely there's a call to bind doc that calls call-with-input-file with read-xml/document which loads up the whole map file's SVG; just about as easily as you could ask for.

The data you get back from read-xml/document is in fact a document struct, which contains an element struct containing attribute structs and lists of more element structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong.

Here's how we handle keyboard input; we're displaying a map with one country highlighted, and key here tells us what the user pressed to categorize the highlighted country. If that key is in the categories hash then we put it into categorizations.

(define categories #hash((select . "eeeeff")
                         (#\1 . "993322")
                         (#\2 . "229911")
                         (#\3 . "ABCD31")
                         (#\4 . "91FF55")
                         (#\5 . "2439DF")))

(define (handle-key canvas categorizations key)
  (cond [(equal? #\backspace key) ; undo
         (swap! categorizations cdr)]
        [(member key (dict-keys categories)) ; categorize
         (swap! categorizations (curry cons key))]
        [(equal? #\space key) ; print state
         (display (unbox categorizations))])
  (send canvas refresh))

Finally once we have a list of categorizations, we need to apply it to the map document and display. We apply a fold reduction over the XML document struct and the list of country categorizations (plus 'select for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to draw-pict:

(define (update original-doc categorizations)
  (for/fold ([doc original-doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (set-style doc n (style-for category))))

(define ((draw doc categorizations) _ context)
  (let* ([newdoc (update doc categorizations)]
         [xml (call-with-output-string (curry write-xml newdoc))])
    (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))

The problem is in that pesky set-style function. All it has to do is reach deep down into the document struct to find the nth path element (the one associated with a given country), and change its 'style attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple:

;; you don't need to understand this; just grasp how huge/awkward it is
(define (set-style doc n new-style)
  (let* ([root (document-element doc)]
         [g (list-ref (element-content root) 8)]
         [paths (element-content g)]
         [path (first (drop (filter element? paths) n))]
         [path-num (list-index (curry eq? path) paths)]
         [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
                                  (element-attributes path))]
         [attr (list-ref (element-attributes path) style-index)]
         [new-attr (make-attribute (source-start attr)
                                   (source-stop attr)
                                   (attribute-name attr)
                                   new-style)]
         [new-path (make-element (source-start path)
                                 (source-stop path)
                                 (element-name path)
                                 (list-set (element-attributes path)
                                           style-index new-attr)
                                 (element-content path))]
         [new-g (make-element (source-start g)
                              (source-stop g)
                              (element-name g)
                              (element-attributes g)
                              (list-set paths path-num new-path))]
         [root-contents (list-set (element-content root) 8 new-g)])
    (make-document (document-prolog doc)
                   (make-element (source-start root)
                                 (source-stop root)
                                 (element-name root)
                                 (element-attributes root)
                                 root-contents)
                   (document-misc doc))))

The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field x replaced by the value of (f (lookup x))". Racket can do this with dictionaries but not with structs[2]. If you want a modified version you have to create a fresh one[3].


first lensman

When I brought this up in the #racket channel on Freenode, I was helpfully pointed to the 3rd-party Lens library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's a flaw preventing them from working with xml structs, so it seemed I was out of luck.

But then I was pointed to X-expressions as an alternative to structs. The xml->xexpr function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.

For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the nth country and its style attribute. The lens-compose function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way compose works for functions). Also note that defining one lens gives us the ability to both get nested values (with lens-view) and update them.

(define (style-lens n)
  (lens-compose (dict-ref-lens 'style)
                second-lens
                (list-ref-lens (add1 (* n 2)))
                (list-ref-lens 10)))

Our <path> XML elements are under the 10th item of the root xexpr (hence the list-ref-lens with 10) and they are interspersed with whitespace, so we have to double n to find the <path> we want. The second-lens call gets us to that element's attribute alist, and dict-ref-lens lets us zoom in on the 'style key out of that alist.

Once we have our lens, it's just a matter of replacing set-style with a call to lens-set in the update function we had above, and then we're off:

(define (update doc categorizations)
  (for/fold ([d doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (lens-set (style-lens n) d (list (style-for category)))))
second stage lensman

Oftentimes the trade-off between freeform maps/hashes and structured data feels like one of convenience vs. long-term maintainability. While it's unfortunate that lenses can't be used with the xml structs[4], they provide a way to get the best of both worlds, at least in some situations.

The final version of the code clocks in at 51 lines and is available on GitLab.


[1] The LÖVE framework is the closest thing, but it doesn't have the same support for images as a first-class data type that works in the repl.

[2] If you're defining your own structs, you can make them implement the dictionary interface, but with the xml library we have to use the struct definitions provided us.

[3] Technically you can use the struct-copy function, but it's not that much better. The field names must be provided at compile-time, and it's no more efficient as it copies the entire contents instead of sharing internal structure. And it still doesn't have an API that allows you to express the new value as a function of the old value.

[4] Lenses work with most regular structs as long as they are transparent and don't use subtyping. Subtyping and opaque structs are generally considered bad form in modern Racket, but you do find older libraries that use them from time to time.

-1:-- in which the cost of structured data is reduced (Post Phil Hagelberg)--L0--C0--January 12, 2018 07:53 PM

Timo Geusch: Emacs within Emacs within Emacs…

A quick follow-up to my last post where I was experimenting with running emacsclient from an ansi-term running in the main Emacs. Interestingly, you can run Emacs in text mode within an ansi-term, just not emacsclient: Yes, the whole thing Read More

The post Emacs within Emacs within Emacs… appeared first on The Lone C++ Coder's Blog.

-1:-- Emacs within Emacs within Emacs… (Post Timo Geusch)--L0--C0--January 10, 2018 05:14 AM

emacspeak: Updating Voxin TTS Server To Avoid A Possible ALSA Bug

Updating Voxin TTS Server To Avoid A Possible ALSA Bug

1 Summary

I recently updated to a new Linux laptop running the latest Debian (Rodete). The upgrade went smoothly, but when I started using the machine, I found that the Emacspeak TTS server for Voxin (Outloud) crashed consistently; here, "consistently" meant crashing on short utterances, which made typing or navigating by character an extremely frustrating experience.


I fixed the issue by creating a work-around in the TTS server atcleci.cpp::xrun — if you run into this issue, make sure to update and rebuild atcleci.so from GitHub; alternatively, you'll find an updated atcleci.so in the servers/linux-outloud/lib/ directory after a git update that you can copy over to your servers/linux-outloud directory.


2 What Was Crashing

I use a DMIX plugin as the default device — and have many ALSA virtual devices that are defined in terms of this device — see my asoundrc. With this configuration, writing to the ALSA device was raising an EPIPE error — normally this error indicates a buffer underrun — that's when ALSA is starved of audio data. But in many of these cases, the ALSA device was still in a RUNNING rather than an XRUN state — this caused the Emacspeak server to abort. Curiously, this happened only sporadically — and from my experimentation only happened when there were multiple streams of audio active on the machine.

A few Google searches showed threads on the alsa/kernel devel lists that indicated that this bug was present in the case of DMIX devices — it was hard to tell if the patch that was submitted on the alsa-devel list had made it into my installation of Debian.


3 Fixing The Problem

My original implementation of function xrun had been cloned from aplay.c about 15+ years ago — looking at the newest aplay implementation, little to nothing had changed there. I finally worked around the issue by adding a call to

snd_pcm_prepare(AHandle)

whenever ALSA raised an EPIPE error during write — with the ALSA device state in a RUNNING rather than an XRUN state. This appears to fix the issue.

-1:-- Updating Voxin TTS Server To  Avoid A Possible ALSA Bug (Post T. V. Raman (noreply@blogger.com))--L0--C0--January 08, 2018 06:06 PM

Rubén Berenguel: 2017: Year in Review

I’m trying to make these posts a tradition (even if a few days late). I thought 2016 had been a really weird and fun year, but 2017 has beaten it easily. And I only hope 2018 will be even better in every way. For the record, when I say we, it means Laia and me unless explicitly changed.

Beware, some of the links are affiliate links. I only recommend what I have and like though, get at your own risk :)

Work

Everything work related has gone up. More work, better work, more interesting work. Good, isn’t it?

As far as my consulting job in London, the most relevant parts would be:

  • Led a rewrite and refactor of the adserver (Golang) to improve speed and reliability.
  • Migrated a batch job from Apache Pig to Apache Spark to be able to cope with larger amounts of data from third parties (now we process 2x the data with 1/10th of the cost).
  • Planned an upgrade of our Kafka cluster from Kafka 0.8.2 to Kafka 0.10.1, which we could not execute as well as planned because the cluster went down. Helped save that day together with the director of engineering when that happened.
  • Was part of the hiring team; we’ve had one successful hire this year (passed probation, is an excellent team member and loves weird tech). Hopefully we can enlarge our team much more in the coming year.
  • Put a real time service in Akka in production, serving and evaluating models generated by a Spark batch job.

We also moved offices, and now we have a free barista “on premises”. Free, good-quality coffee is the best thing that can be done to improve my productivity.

In April I got new business cards (designed by Laia, you can get your own design if you want, contact her):


I kept on helping a company with its SEO efforts, and as usual patience works. Search traffic has improved 30% year-to-year, so I’m pretty happy with it. Let’s see what the new year brings.

I became technical advisor to a local startup (an old friend with a PhD in maths is a founder and works there as data scientist/engineer/whatever), trying to bring data insights to small and medium retailers. I help them with technology decisions where I have more hands-on experience, or know where the industry is moving.

Life

Traveling up and down as usual (2-3 weeks in London, then l’Arboç, then maybe somewhere else…) sprinkled with some conferences and holidays.

Regarding life, the universe and everything, here’s what I’ve done and where I’ve been:
  • In February we visited Hay-on-Wye again, for my birthday
  • In March I convinced Holden Karau (was easy: she loves talking about Spark :D) to be one of our great keynote speakers at PyData Barcelona 2017
  • In late March we visited Edinburgh and Glasgow
  • In early May I attended PyData London to be able to prepare better for ours. Met some great people there.
  • A bit later in May I visited Lisbon for LX Scala, thanks Jorge and the rest for the great work
  • And at the end of May, we held PyData Barcelona 2017, where I was one of the organisers. We had more than 300 attendees, enjoying a lot of interesting talks. Thanks to all attendees and the rest of the organising committee... We made a hell of a great conference
  • Mid-June, I gave my first meetup presentation, Snakes and Ladders (about typing in Python as compared with Scala) in the PyBCN meetup
  • In late June, we visited Cheddar and Wells
  • In September I visited Penrith for the awesome (thanks Jon) Scala World 2017. Looking forward to the 2019 edition.
  • In early October we visited San Sebastian for the Python San Sebastian 2017 conference. We ate terribly well there (we can recommend Bodegón Alejandro as one of the best places to eat anywhere in the world now)
  • Mid-October we visited Bletchley Park. Nice.
  • In late October we (Ernest Fontich and myself) submitted our paper Normal forms and Sternberg conjugation theorems for infinite dimensional coupled map lattice. Now we need to wait.
  • In November we visited Brussels (Ghent and Brugge too), and took an unofficial tour of the European Council with a friend who works there.
  • In December I attended for the second time Scala Exchange, and the extra community day (excellent tutorials by Heiko Seeberger and Travis Brown). Was even better than last year (maybe because I knew more people?) and I already got my tickets for next year.
  • In December we attended a wine and cheese pairing (with Francesc, our man in Brussels, and Laia) at Parés Baltà. They follow biodynamic principles (no herbicides, as natural as they can get, etc) and offer added sulfite free wines, too. They are excellent: neither Laia nor I drink, and we bought 4 bottles of their wines and cavas.
Last year I decided to start contributing to open source software this year, and I managed to become a contributor to the following projects:

I wanted to contribute to the Go compiler code base, but didn’t find an interesting issue. Maybe this year.

Learning


This year I didn’t push courses/learning as strongly as last year... Or at least this is what I thought before writing this post.

  • In August I took Apache Kafka Series - Learn Apache Kafka for Beginners, with the rest of the courses in the series waiting for me to have more time available.
  • In September I tried to learn knitting and lace, but it does not seem to suit me.
  • In September I enrolled in a weekly Taichi and Qi Gong course by Mei Quan Tai Chi. Will repeat for the next term
  • In December I started learning about Cardistry

Reading

I have read slightly less than last year (36 books vs 44 last year), and the main victim has been fiction. Haven’t read much, and the best... has been the re-read of Zelazny’s Chronicles of Amber. Still excellent. I have enlarged my collection of Zelazny books, now I have more than 30.

As far as non-fiction goes, I have specially enjoyed:
  • Essentialism: given how many things I do at once, this book felt quite refreshing
  • Rich dad, poor dad: Nothing too fancy, just common sense. Invest in assets (money-generating items) instead of liabilities (money-sucking items, like the house you live in)
  • 10% Entrepreneur: Links very well with the above. Being a 10% entrepreneur is a natural way to invest in your assets.
  • The Checklist Manifesto: Checklists are a way to automate your life. I have read several books around this concept (“creating and tweaking systems”, as a concept) and it resonates with me. If I can automate (even if I’m the machine), it’s a neat win.
  • The Subtle Art of Not Giving a F*ck: Recommended read. For no particular reason. I’ve heard that the audiobook version is great, too.

Music/events


This year I have listened mostly to Sonata Arctica. We attended their concert in Glasgow (March) and it was awesome; they are really good live. This was a build-up for KISS at the O2 in London (May), which was totally terrific, followed by Bat Out of Hell (opening day!) in London. It was great, and probably the closest I’ll ever be to listening to Meat Loaf live. Lately I’ve been listening to a very short playlist I have by Loquillo, and also Anachronist.

We have also attended a performance by Penn and Teller (excellent), and IIRC we have also watched just one screening: The Last Jedi (meh, but Laia liked it).

Gadgets

This year I have gotten hold of a lot of gadgets. I mention only the terribly useful or interesting ones:
  • From last year, iPhone 7 “small”. Not happy with it. Battery life sucks big time, so I got an external Mophie battery for it.
  • Mid-year: Apple Watch Series 2. Pretty cool, and more useful than I expected.
  • Late this year: AirPods. THEY ARE AWESOME
  • Laptop foldable cooling support. While taking the deep learning course my Air got very hot, and I needed some way to get it as cool as possible.
  • Nutribullet. My morning driver is banana, Kit Kat chunky, milk, golden flax seed, guarana.
  • Icebreaker merino underwear. I sweat a bit, and get easily chafed on the side of my legs (where it contacts my underwear). Not any more: not only is wool better at sweat-handling, but the fabric also feels better on the skin. And no, it does not feel hot in the summer.
  • Double Edge Shaving. I hated shaving (and actually just kept my beard trimmed so it was never a real beard or a clean shave...) and this razor (not this one specifically, safety razors are pretty much all the same) has changed that. Now I shave regularly and enjoy it a lot (together with this soap and this after shave balm)
  • Chilly bottles. They work really well to keep drinks cold or hot. I’ll be getting their food container soon.
  • Plenty of lightning cables. You can never have enough of these. I also got this great multi-device charger, ideal for traveling.
  • Compact wallet. I’ve been shown the ads so many times I finally moved from my Tyvek wallets to one from Bellroy. It is very good.
  • Book darts. Small bookmarks that don’t get lost, look great and can double as line markers. Also, they don’t add bulk to a book, so you can have many in the same book without damaging it at all. They are great, I’m getting a second tin in my next Amazon order of stuff.
  • Two frames from an artist I saw showcased in our previous office (they had exhibits downstairs). Blue Plaque Doors and Hatchard’s, by Luke Adam Hawker.
On the fun side, I also have a spiral didgeridoo, a proper set of Scottish bagpipes, a Lego Mindstorms set I have not played with yet :( and an Arduboy. Oh, and a Raspberry Pi Zero Wireless.


-1:-- 2017: Year in Review (Post Rubén Berenguel (noreply@blogger.com))--L0--C0--January 06, 2018 02:31 PM

Wilfred Hughes: The Emacs Guru Guide to Key Bindings

Imagine that you hold Control and type your name into Emacs. Can you describe what will happen?

– The ‘Emacs Guru Test’

Emacs shortcuts (known as ‘key bindings’) can seem ridiculous to beginners. Some Emacs users even argue you should change them as soon as you start using Emacs.

They are wrong. In this post, I’ll describe the logic behind the Emacs key bindings. Not only will you be closer to passing the guru test, but you might even find you like some of the defaults!

There Are How Many?

Emacs has a ton of key bindings.

ELISP> (length global-map)
143

Emacs is a modal editor in the sense that it is built around major and minor modes, so most key bindings are mode-specific. However, my current Emacs instance has well over a hundred global shortcuts that work everywhere.

(Keymaps are nested data structures, so this actually undercounts! For example, C-h C-h and C-h f are not counted separately.)
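
If you're curious how much those nested keymaps hide, here's a rough sketch (my/count-bindings is a made-up name, not something from the post) that descends into prefix keymaps instead of stopping at the top level:

(defun my/count-bindings (keymap)
  "Count the bindings in KEYMAP, descending into nested prefix keymaps."
  (let ((count 0))
    (map-keymap
     (lambda (_event def)
       ;; prefix keys such as C-x and C-h bind further keymaps, sometimes via a symbol
       (let ((sub (if (and (symbolp def) (fboundp def)) (symbol-function def) def)))
         (if (and (keymapp sub) (not (autoloadp sub)))
             (setq count (+ count (my/count-bindings sub)))
           (setq count (1+ count)))))
     keymap)
    count))

Evaluating (my/count-bindings global-map) should report a number well beyond the 143 above.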

Even that is a drop in the bucket compared with how many commands we could define key bindings for.

ELISP> (let ((total 0))
         (mapatoms
          (lambda (sym)
            (when (commandp sym)
              (setq total (1+ total)))))
         total)
8612

How can we possibly organise all these commands?

Mnemonic Key Bindings

Basic commands are often given key bindings based on their name. You’ll encounter all of these important commands in the Emacs tutorial.

Command Key Binding
eXecute-extended-command M-x
Next-line C-n
Previous-line C-p
Forward-char C-f
Backward-char C-b
iSearch-forward C-s

Mnemonics are a really effective way of memorising things. If you can remember the name of the command, you can probably remember the key binding too.

Organised Key Bindings

Many Emacs movement commands are laid out in a consistent pattern.

For example, movement by a certain amount:

Command Key Binding
forward-char C-f
forward-word M-f
forward-sexp C-M-f

Moving to the end of something:

Command Key Binding
move-end-of-line C-e
forward-sentence M-e
end-of-defun C-M-e

Transposing, which swaps text either side of the cursor:

Command Key Binding
transpose-chars C-t
transpose-words M-t
transpose-sexps C-M-t

Killing text:

Command Key Binding
kill-line C-k
kill-sentence M-k
kill-sexp C-M-k

Have you spotted the pattern?

The pattern here is that C-whatever commands are usually small, dumb text operations. M-whatever commands are larger, and usually operate on words.

C-M-whatever commands are slightly magical. These commands understand the code they’re looking at, and operate on whole expressions. Emacs uses the term ‘sexp’ (s-expression), but these commands usually work in any programming language!

Discovering Key Bindings

What happens when you press C-a? Emacs can tell you. C-h k C-a will show you exactly what command is run.

If you use a command without its key binding, Emacs will helpfully remind you there’s a shortcut available.

You can even do this backwards! If Emacs has done something neat or unexpected, you might wonder what command ran. C-h l will reveal what the command was, and exactly which keys triggered it.

Room For Emacs

Why are Emacs key bindings different from conventional shortcuts? Why doesn’t C-c copy text to the clipboard, like many other programs?

Emacs uses mnemonics for its clipboard commands: you ‘kill’ and ‘yank’ text, so the key bindings are C-k and C-y. If you really want, you can use cua-mode so C-x acts as you expect.
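
For completeness, enabling that is a single line in an init file (shown here as a sketch, not a recommendation from the post):

;; C-x/C-c cut/copy when the region is active; C-v pastes, C-z undoes
(cua-mode 1)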

The problem is that Emacs commands are too versatile, too general to fit in the usual C-x, C-c, C-v. Emacs has four clipboard commands:

  1. kill: remove text and insert it into the kill-ring. This is like clipboard cut, but you can do it multiple times and Emacs will remember every item in your clipboard.
  2. kill-ring-save: copy the selected text into the kill-ring. This is like clipboard copy, but you can also do this multiple times.
  3. yank: insert text from the kill-ring. This is like clipboard paste.
  4. yank-pop: replace the previously yanked text with the next item in the kill ring. There is no equivalent in a single-item clipboard!

The generality of Emacs means that it’s hard to find a key binding for everything. Key bindings tend to be slightly longer as a result: opening a file is C-x C-f, an additional keystroke over the C-o of other programs.

Room For You

With all these key bindings already defined, what bindings should you use for your personal favourite commands?

Much like the IP address range 192.168.x.x is reserved for private use, Emacs has keys that are reserved for user configuration. All the sequences C-c LETTER, such as C-c a, are reserved for your usage, as are <F5> through to <F9>.

For example, if you find yourself using imenu a lot, you might bind C-c i:

(global-set-key (kbd "C-c i") #'imenu)

You Make The Rules

This doesn’t mean that you should never modify key bindings. Emacsers create weird and wonderful ways of mapping keys all the time.

Emacs will even try to accommodate this. If you open the tutorial after changing a basic key binding, it will update accordingly!

The secret to mastering Emacs is to remember everything is self-documenting. Learn the help commands to find out which commands have default key bindings. Consider following the existing patterns when you define new key bindings or override existing ones. org-mode, for example, redefines C-M-t to transpose org elements.

Once you understand the patterns, you’ll know when to follow and when to break them. You’ll also be much closer to passing that guru test!

-1:-- The Emacs Guru Guide to Key Bindings (Post Wilfred Hughes (me@wilfred.me.uk))--L0--C0--January 06, 2018 12:00 AM

Alex Schroeder: Gopher Mode

Yeah, I’ve been working on Gopher stuff over the holidays.

  1. a Gopher server wrapper around Oddmuse wiki (and this site is running it, see gopher://alexschroeder.ch)
  2. a proposal of a new item type to write to a Gopher server with examples based on netcat, i.e. nc
  3. improvements to the Emacs Gopher client with support for HTML and the new item type (see this branch on GitHub)

Isn’t that amazing.


-1:-- Gopher Mode (Post)--L0--C0--January 03, 2018 08:05 AM

Emacs Redux: A Crazy Productivity Boost: Remapping Return to Control (2017 Edition)

Back in 2013 I wrote about my favourite productivity boost in Emacs, namely remapping Return to Control, which in combination with the classic remapping of CapsLock to Control makes it really easy to get a grip on Emacs’s obsession with the Control key.

In the original article I suggested to OS X (now macOS) users the tool KeyRemap4MacBook, which was eventually renamed to Karabiner. Unfortunately this tool stopped working in macOS Sierra, due to some internal kernel architecture changes.

That was pretty painful for me as it meant that on my old MacBook I couldn’t upgrade to the newest macOS editions and on my new MacBook I couldn’t type properly in Emacs (as it came with Sierra pre-installed)… Bummer!

Fortunately 2 years later this is finally solved - the Karabiner team rewrote Karabiner from scratch for newer macOS releases and recently added my dream feature to the new Karabiner Elements. Unlike in the past though, this remapping is not actually bundled with Karabiner by default, so you have to download and enable it manually from here.

That’s actually even better than what I had originally suggested, as it also gives CapsLock a dual purpose - Control when held down and Escape when tapped on its own. I have no idea why this never occurred to me, but it’s truly epic! A crazy productivity boost just got even crazier!

Enjoy!

-1:-- A Crazy Productivity Boost: Remapping Return to Control (2017 Edition) (Post)--L0--C0--December 31, 2017 09:22 AM

Emacs Redux: Into to CIDER

CIDER is a popular Clojure programming environment for Emacs.

In a nutshell - CIDER extends Emacs with support for interactive programming in Clojure. The features are centered around cider-mode, an Emacs minor-mode that complements clojure-mode. While clojure-mode supports editing Clojure source files, cider-mode adds support for interacting with a running Clojure process for compilation, debugging, definition and documentation lookup, running tests and so on.

You can safely think of CIDER as SLIME (a legendary Common Lisp programming environment) for Clojure - after all SLIME was the principal inspiration for CIDER to begin with. If you’re interested in some historical background you can check out my talk on the subject The Evolution of the Emacs tooling for Clojure.

Many people who are new to Lisps (and Emacs) really struggle with the concept of “interactive programming” and are often asking what’s the easiest (and fastest) way to “grok” (understand) it.

While CIDER has an extensive manual and a section on interactive programming there, it seems for most people that’s not enough to get a clear understanding of interactive programming fundamentals and appreciate its advantages.

I always felt what CIDER needed were more video tutorials on the subject, but for one reason or another I never found the time to produce any. In the past this amazing intro to SLIME really changed my perception of SLIME and got me from 0 to 80 in like one hour. I wanted to do the same for CIDER users! And I accidentally did this in a way last year - at an FP conference I was attending to present CIDER, one of the speakers dropped out, and I was invited to fill in for them with a hands-on session on CIDER. It was officially named Deep Dive into CIDER, but probably “Intro to CIDER” would have been a more appropriate name, and it’s likely the best video introduction to CIDER around today. It’s certainly not my finest piece of work, and I definitely have to revisit the idea for proper high-quality tutorials in the future, but it’s better than nothing. I hope at least some of you will find it useful!

You might also find some of the additional CIDER resources mentioned in the manual helpful.

Enjoy!

-1:-- Into to CIDER (Post)--L0--C0--December 31, 2017 08:57 AM

(or emacs: Using digits to select company-mode candidates

I'd like to share a customization of company-mode that I've been using for a while. I refined it just recently; I'll explain how below.

Basic setting

(setq company-show-numbers t)

Now, numbers are shown next to the candidates, although they don't do anything yet:

company-numbers

Add some bindings

(let ((map company-active-map))
  (mapc
   (lambda (x)
     (define-key map (format "%d" x) 'ora-company-number))
   (number-sequence 0 9))
  (define-key map " " (lambda ()
                        (interactive)
                        (company-abort)
                        (self-insert-command 1)))
  (define-key map (kbd "<return>") nil))

Besides binding 0..9 to complete their corresponding candidate, it also un-binds RET and binds SPC to close the company popup.

Actual code

(defun ora-company-number ()
  "Forward to `company-complete-number'.

Unless the number is potentially part of the candidate.
In that case, insert the number."
  (interactive)
  (let* ((k (this-command-keys))
         (re (concat "^" company-prefix k)))
    (if (cl-find-if (lambda (s) (string-match re s))
                    company-candidates)
        (self-insert-command 1)
      (company-complete-number (string-to-number k)))))

Initially, I would just bind company-complete-number. The problem with that was that if my candidate list was ("var0" "var1" "var2"), then entering 1 means:

  • select the first candidate (i.e. "var0"), instead of:
  • insert "1", resulting in "var1", i.e. the second candidate.

My customization will now check company-candidates—the list of possible completions—for the above-mentioned conflict. And if it's detected, the key pressed will be inserted instead of being used to select a candidate.
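
To see that check in isolation, here's a small standalone sketch with made-up values standing in for company-prefix and company-candidates (the real variables are only meaningful during an active completion):

(require 'cl-lib)

;; hypothetical completion state: prefix "var", candidates var0/var1/var2
(let* ((candidates '("var0" "var1" "var2"))
       (prefix "var")
       (key "1")
       (re (concat "^" prefix key)))
  ;; a non-nil result means the typed digit could still be part of a candidate,
  ;; so ora-company-number inserts it instead of selecting by number
  (cl-find-if (lambda (s) (string-match re s)) candidates))
;; => "var1"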

Outro

Looking at git-log, I've been using company-complete-number for at least 3 years now. It's quite useful, and now also more seamless, since I don't have to type e.g. C-q 2 any more. In any case, thanks to the author and the contributors of company-mode. Merry Christmas and happy hacking in the New Year!

-1:-- Using digits to select company-mode candidates (Post)--L0--C0--December 26, 2017 11:00 PM

Manuel Uberti: A year of functional programming

First things first: the title is a lie.

If you happen to be one of my passionate readers, you may recall I started working with Clojure on April 1. So yes, not every month of the year has been devoted to functional programming. I just needed something bold to pull you in, sorry.

Now, how does it feel having worked with Clojure for almost a year?

Here at 7bridges we had our fair share of projects. The open source ones are just a selected few: clj-odbp, a driver for OrientDB binary protocol; carter, an SPA to show how our driver works; remys, a little tool to interact with MySQL databases via REST APIs. I also had the chance to play with ArangoDB recently, and there were no problems building a sample project to understand its APIs.

At home, boodle was born to strengthen my ever-growing knowledge and do something useful for the family.

When I started in the new office, the switch from professional Java to professional Clojure was a bit overwhelming. New libraries, new tools, new patterns, new ways of solving the same old problems, new problems to approach with a totally different mindset. It all seemed too much.

Then, something clicked.

Having the same language on both client- and server-side helped me figure out the matters at hand with a set of ideas I could easily reuse. Once I understood the problem, I could look for the steps to solve it. Each step required a data structure and the function to handle this data structure. The first time I used reduce-kv because it was the most natural choice, it left a great smile on my face.

There is still much to learn, though. Due to my lack of experience with JavaScript, my ClojureScript-fu needs to improve. I have come to appreciate unit testing, but it’s time to put this love to work on my .cljs files too. I also definitely want to know more about the security and performance of Clojure web applications.

2017 has been a great year to be a functional programmer. My recent liaison with Haskell is directing me more and more on my way. The functional programming way.

-1:-- A year of functional programming (Post)--L0--C0--December 21, 2017 12:00 AM

Timo Geusch: Running Emacs from inside Emacs

I’m experimenting with screen recordings at the moment and just out of curiosity decided to see if I can load and edit a text file inside the main Emacs process from inside an ansi-term using emacsclient. Spoiler alert – yes, Read More

The post Running Emacs from inside Emacs appeared first on The Lone C++ Coder's Blog.

-1:-- Running Emacs from inside Emacs (Post Timo Geusch)--L0--C0--December 14, 2017 05:44 AM

(or emacs: Comparison of transaction fees on Patreon and similar services

On December 7, Patreon made an announcement about the change in their transaction fee structure. The results as of December 10 speak for themselves:

December 2017 summary: -$29 in pledges, -6 patrons

All leaving patrons marked "I'm not happy with Patreon's features or services." as the reason for leaving, with quotes ranging from:

The billing changes are not great.

to:

Patreon's new fees are unacceptable

In this article, I will explore the currently available methods for supporting sustainable Free Software development and compare their transaction fees.

My experience

My experience taking donations is very short. I announced my fund raising campaign on Patreon in October 2017.

Here's what I collected so far, vs the actual money spent by the contributors:

  • 2017-11-01: $140.42 / $162.50 = 86.41%
  • 2017-12-01: $163.05 / $187.50 = 86.96%

The numbers here are using the old Patreon rules that are going away this month.

Real numbers

method          formula        charged  donated  fee
old Patreon     ???            $1.00    $0.86    14%
new Patreon     7.9% + $0.35   $1.38    $0.95    31%
                               $2.41    $1.90    21%
                               $5.50    $4.75    14%
OpenCollective  12.9% + $0.30  $1.33    $0.90    32%
                               $2.36    $1.80    24%
                               $5.45    $4.50    18%
Flattr          16.5%          $1.00    $0.84    17%
                               $2.00    $1.67    17%
                               $5.00    $4.18    17%
Liberapay       0.585%         $1.00    $0.99    1%

On Patreon

Just like everyone else, I'm not happy with the incoming change to the Patreon fees. But even after the change, it's still a better deal than OpenCollective, which is used quite successfully e.g. by CIDER.

Just to restate the numbers in the table: if all backers give $1 (which is the majority currently, and I actually would generally prefer 5 new $1 backers over 1 new $5 backer), with the old system I get $0.86, while with the new system it's $0.69. That's more than a 100% increase in transaction fees.
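
As a quick sanity check of that claim, here's a throwaway calculation; the 0.86 and 0.69 figures are the per-dollar amounts quoted above, not official Patreon numbers:

(let* ((kept-old 0.86)             ; what reaches me from a $1 pledge, old scheme
       (kept-new 0.69)             ; the same pledge under the new scheme
       (fee-old (- 1 kept-old))
       (fee-new (- 1 kept-new)))
  (* 100 (/ (- fee-new fee-old) fee-old)))
;; => ~121, i.e. more than a 100% increase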

On OpenCollective

It's more expensive than the new Patreon fees in every category or scenario.

On Flattr

Flattr is in the same bucket as Patreon, except with slightly lower fees currently. Their default plan sounds absolutely ridiculous to me: you install a browser plug-in so that a for-profit corporation can track which websites you visit most often in order to distribute the payments you give them among those websites.

If it were a completely local tool which doesn't upload any data on the internet and instead gives you a monthly report to adjust your donations, it would have been a good enough tool. Maybe with some adjustments for mind-share bubbles, which result in prominent projects getting more rewards than they can handle, while small projects fade away into obscurity without getting a chance. But right now it's completely crazy. Still, if you don't install the plug-in, you can probably still use Flattr and it will work similarly to Patreon.

I made an account, just in case, but I wouldn't recommend going to Flattr unless you're already there, or the first impression it made on me is wrong.

On Paypal

Paypal is OK in a way, since a lot of the time the organizations like Patreon are just middle men on top of Paypal. On the other hand, there's no way to set up recurring donations. And it's harder for me to plan decisions regarding my livelihood if I don't know at least approximately the sum I'll be getting next month.

My account, in case you want to make a lump sum donation: paypal.me/aboabo.

On Bitcoin

Bitcoin is similar to Paypal, except it also:

  • has a very bad impact on the environment,
  • is a speculative bubble that supports either earning or losing money without actually providing value to the society.

I prefer to stay away from Bitcoin.

Summary

Liberapay sounds almost too good to be true. At the same time, their fees are very realistic, you could almost say optimal, since there are no fees for transfers between members. So you can spend either €20.64 (via card) or €20.12 (via bank wire) to charge €20 into your account and give me €1 per month at no further cost. If you change your mind after one month, you can withdraw your remaining €19 for free if you use a SEPA (Single Euro Payments Area) bank.

If I set out today to set up a service similar to Liberapay, even with my best intentions and the most optimistic expectations, I don't see how a better offer could be made. I recommend anyone who wants to support me to try it out. And, of course, I will report back with real numbers if anything comes out of it.

Thanks to all my patrons for their former and ongoing support. At one point we were at 30% of the monthly goal (25% atm.). This made me very excited and optimistic about the future. Although I'm doing Free Software for almost 5 years now, it's actually 3 years in academia and 2 years in industry. Right now, I'm feeling a burnout looming over the horizon, and I was really hoping to avoid it by spending less time working at for-profit corporations. Any help, either monetary or advice is appreciated. If you're a part of a Software Engineering or a Research collective that makes you feel inspired instead of exhausted in the evening and you have open positions in EU or on remote, have a look at my LinkedIn - maybe we could become colleagues in the future. I'll accept connections from anyone - if you're reading this blog, we probably have a lot in common; and it's always better together.

-1:-- Comparison of transaction fees on Patreon and similar services (Post)--L0--C0--December 09, 2017 11:00 PM

Sanel Zukan: Distraction-free EWW surfing

Sometimes when I plan to read a longish html text, I fire up EWW, a small web browser that comes with Emacs.

However, reading pages on larger monitor doesn't provide good experience, at least not for me. Here is an example:

eww-non-readable

Let's fix that with some elisp code:

(defun eww-more-readable ()
  "Makes eww more pleasant to use. Run it after eww buffer is loaded."
  (interactive)
  (setq eww-header-line-format nil)               ;; removes page title
  (setq mode-line-format nil)                     ;; removes mode-line
  (set-window-margins (get-buffer-window) 20 20)  ;; increases size of margins
  (redraw-display)                                ;; apply mode-line changes
  (eww-reload 'local))                            ;; apply eww-header changes

EWW already comes with an eww-readable function, so I named mine eww-more-readable.

Evaluate it and call with:

M-x eww-more-readable

Result is much better now:

eww-readable
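
If the fixed 20-column margins don't suit your monitor, here's a variant sketch of my own (eww-center-column is a made-up name, not from the post) that sizes the margins so the text area is roughly 80 columns wide:

(defun eww-center-column ()
  "Widen the window margins so the EWW text area is roughly 80 columns."
  (interactive)
  (let* ((win (get-buffer-window))
         (margin (max 0 (/ (- (window-total-width win) 80) 2))))
    (set-window-margins win margin margin)))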

EDIT: Chunyang Xu noticed that the elisp code had a balanced-parentheses issue and also suggested using (eww-reload 'local) to avoid re-fetching the page. Thanks!

-1:-- Distraction-free EWW surfing (Post)--L0--C0--November 30, 2017 11:00 PM

Pragmatic Emacs: Reorder TODO items in your org-mode agenda

I use org-mode to manage my to-do list with priorities and deadlines, but inevitably I have multiple items without a specific deadline or scheduled date that have the same priority. These appear in my agenda in the order in which they were added to my to-do list, but I’ll sometimes want to change that order. This can be done temporarily using M-UP or M-DOWN in the agenda view, but these changes are lost when the agenda is refreshed.

I came up with a two-part solution to this. The main part is a generic function to move the subtree at the current point to be the top item of all subtrees of the same level. Here is the function:

(defun bjm/org-headline-to-top ()
  "Move the current org headline to the top of its section"
  (interactive)
  ;; check if we are at the top level
  (let ((lvl (org-current-level)))
    (cond
     ;; above all headlines so nothing to do
     ((not lvl)
      (message "No headline to move"))
     ((= lvl 1)
      ;; if at top level move current tree to go above first headline
      (org-cut-subtree)
      (beginning-of-buffer)
      ;; test if point is now at the first headline and if not then
      ;; move to the first headline
      (unless (looking-at-p "*")
        (org-next-visible-heading 1))
      (org-paste-subtree))
     ((> lvl 1)
      ;; if not at top level then get position of headline level above
      ;; current section and refile to that position. Inspired by
      ;; https://gist.github.com/alphapapa/2cd1f1fc6accff01fec06946844ef5a5
      (let* ((org-reverse-note-order t)
             (pos (save-excursion
                    (outline-up-heading 1)
                    (point)))
             (filename (buffer-file-name))
             (rfloc (list nil filename nil pos)))
        (org-refile nil nil rfloc))))))

This will move any to-do item to the top of all of the items at the same level as that item. This is equivalent to putting the cursor on the headline you want to move and hitting M-UP until you reach the top of the section.

Now I want to be able to run this from the agenda-view, which is accomplished with the following function, which I then bind to the key 1 in the agenda view.

(defun bjm/org-agenda-item-to-top ()
    "Move the current agenda item to the top of the subtree in its file"
  (interactive)
  ;; save buffers to preserve agenda
  (org-save-all-org-buffers)
  ;; switch to buffer for current agenda item
  (org-agenda-switch-to)
  ;; move item to top
  (bjm/org-headline-to-top)
  ;; go back to agenda view
  (switch-to-buffer (other-buffer (current-buffer) 1))
  ;; refresh agenda
  (org-agenda-redo)
  )

;; bind to key 1
(define-key org-agenda-mode-map (kbd "1") 'bjm/org-agenda-item-to-top)
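
If org-agenda hasn't been loaded yet when your init file runs, org-agenda-mode-map won't exist and the define-key above will fail. A minimal sketch of a deferred version (assuming Emacs 24.4+ for with-eval-after-load):

;; defer the binding until org-agenda (which defines the keymap) is loaded
(with-eval-after-load 'org-agenda
  (define-key org-agenda-mode-map (kbd "1") #'bjm/org-agenda-item-to-top))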

Now in my agenda view, I just hit 1 on a particular item and it is moved permanently to the top of its level (with deadlines and priorities still taking precedence in the final sorting order).

-1:-- Reorder TODO items in your org-mode agenda (Post Ben Maughan)--L0--C0--November 30, 2017 09:56 PM