Wednesday, August 17, 2016

Programming Arduino in Visual Studio Code

I have really been enjoying using Visual Studio Code lately.  I just finished using it for a Golang backend and in place of the Intel XDK editor for a Cordova frontend (the XDK is great for building and launching Cordova apps, but I'm not a big fan of its editor).

I wouldn't say that I'm going to use it for everything (emacs is still my top choice for C++, shell scripting, LaTeX, and quick-and-dirty org-mode tables), but Visual Studio Code is pretty darn nice.  Here are a few quick reasons why I like it:

  • No wasted screen real estate.  No buttons for things that are in a menu.  No toolbars that you can screw up and never find again, like in Eclipse.  It's lean and lightweight.
  • Awesome extensions.  Really.  I installed the Go extension by lukehoban, and it felt like I was working in the preferred IDE for Go.  I installed ESLint, TSLint, and CSSLint, and they just worked perfectly.
  • Git integration.  I have never liked the git integration in any IDE I've ever used.  Visual Studio Code seems to understand the 80/20 rule... for easy git operations, it works great.  For hard things, it knows that I'm just going to use the command line.  I've actually used it as often as not in recent projects.  It just works.
  • Sane keyboard shortcuts.  I don't use a mouse if I can avoid it.  I think it goes back to when I started using Windows 3.0 instead of DOS, when my mouse constantly performed poorly and needed me to disassemble the bottom and clean lint off of the ball and rollers.  I learned the Windows keyboard shortcuts.  And Microsoft rarely changed them (the Office ribbon being a notable exception).  When I work in emacs, the muscle memory in my fingers knows exactly how to navigate.  The same is true for well-designed Windows apps.  When a program breaks the rules (like IntelliJ not respecting that Shift-F10 is supposed to open a context menu), I get so annoyed that I'll rant in the middle of a blog post!
  • Easy configuration.  Configuration files are JSON, and they have autocomplete features to help me do what I need to do.  It's not as powerful as emacs, but I haven't changed my .emacs file in years, because it seems like every time I want to change anything, I have to re-learn elisp and spend an hour digging around on the web.
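
To show what I mean by easy configuration, here's a sketch of a user settings.json.  These keys are standard Visual Studio Code settings; the values are chosen purely for illustration:

// settings.json -- a sketch; standard VS Code keys, illustrative values
{
    "editor.tabSize": 4,
    "editor.rulers": [80],
    "files.autoSave": "off"
}
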
So now that I've established myself as being on the Visual Studio Code bandwagon, let me show one reason why it's so great.  You can make extensions for just about any language or environment, and people do.  In my opinion, the Arduino IDE is just about the worst development tool ever (it rivals writing code in Notepad).  But in just a few easy steps, you can switch to Visual Studio Code, and never look back.

First, you'll want to install the Arduino extension (currently version 0.0.4) by moozzyk.  This gives syntax highlighting, and other simple stuff that you'd expect.

Second, you will need to set up a tasks.json file.  This will let you compile sketches and deploy them to a plugged-in Arduino.  Below is my tasks.json.  It's specific to Windows (though easy to extend to OS X and Linux), and generic enough that you can put it in a .vscode folder in your main Arduino sketchbook folder, and then it will let you build and install any sketch in any subfolder:


// tasks.json for building and running Arduino sketches from
// Visual Studio Code
//
// Note: this configuration uses whatever serial port was most recently used
// by the Arduino IDE
{
    "version": "0.1.0",
    "windows": {
        "command": "c:\\galileo\\arduino_debug.exe"
    },
    "isShellCommand": true,
    "showOutput": "always",
    "suppressTaskName": true,
    "tasks": [
        {
            "taskName": "Compile",
            "args": [
                "--verify",
                "-v",
                "${file}"
            ],
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": {
                "owner": "external",
                "fileLocation": [
                    "relative",
                    "${fileDirname}"
                ],
                "pattern": {
                    "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
                    "file": 1,
                    "line": 2,
                    "column": 3,
                    "severity": 4,
                    "message": 5
                }
            }
        },
        {
            "taskName": "Run",
            "args": [
                "--upload",
                "-v",
                "${file}"
            ],
            "isTestCommand": true,
            "showOutput": "always"
        }
    ]
}

With this file in place, you can use ctrl-shift-b to run the Compile task.  You can reach the Run task by using ctrl-shift-p and then typing "task test", which will choose "Tasks: Run Test Task".
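
If typing "task test" gets tiresome, you can also bind the Run task to a key of your own.  Here's a sketch of a keybindings.json entry; the command name is VS Code's built-in "Run Test Task" command, and the key chord is an arbitrary choice (pick one that's free on your machine):

// keybindings.json -- bind a chord to "Tasks: Run Test Task"
[
    {
        "key": "ctrl+alt+t",
        "command": "workbench.action.tasks.test"
    }
]
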

So then, at this point, you only need the Arduino IDE for two things: changing the board type and serial port, and running a serial monitor.  Short of hard-coding information in your tasks.json file whenever the board info changes, I don't know of an easy workaround for the first problem.  But for the second, it turns out that PuTTY is a fine alternative to the Arduino serial monitor.  I've been using it for a decade, and never noticed that with one click, you can switch from ssh to serial connections:

[Screenshot: the PuTTY configuration dialog, with "Serial" selected as the connection type]

That's it!  Just click on the "Serial" radio button, enter your COM port, and you have everything you need.
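
As for the first problem: if you can tolerate hard-coding, the Arduino command line should also accept --board and --port flags, so you can pin both in the task's args.  Here's a sketch of what the Run task's args might look like; the board string and COM port are assumptions (an Uno on COM3), so substitute your own hardware:

// Hard-coded board and port in the Run task (values are assumptions)
"args": [
    "--board", "arduino:avr:uno",
    "--port", "COM3",
    "--upload",
    "-v",
    "${file}"
]
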

For the sake of completeness, here's the sketch I used.  Intel graciously donated some Galileo V2 boards and Grove IoT kits, and my son used them to display temperature readings on an LCD:

// Sketch for Galileo Gen 2 with Grove kit, for displaying temperature in
// Fahrenheit on an LCD
//
// Temperature Sensor is connected to A0
// LCD Screen is connected to I2C #2

#include <Wire.h>
#include "rgb_lcd.h"

rgb_lcd lcd;

// background color for the LCD
const int colorR = 0;
const int colorG = 255;
const int colorB = 0;

// Pin for the temperature sensor
const int pinTemp = A0;

// Define the B-value of the thermistor in the Grove kit, so we can convert
// to Celsius
const int B = 3975;

void setup() {
    // set up the LCD's number of columns and rows, then its color
    lcd.begin(16, 2);
    lcd.setRGB(colorR, colorG, colorB);
}

void loop() {
    // Position cursor at 0,0
    lcd.setCursor(0, 0);

    // Read raw value of temperature sensor
    int val = analogRead(pinTemp);

    // Convert to Fahrenheit
    float resistance = (float)(1023-val)*10000/val;
    float temperature = 1/(log(resistance/10000)/B+1/298.15)-273.15;
    temperature = temperature * 9.0 / 5.0 + 32;

    // Print temperature to LCD
    lcd.print(temperature);

    // Repeat every second
    delay(1000);
}

Have you tried Visual Studio Code yet?  Should I post more of my experiences?  Leave a comment to share your thoughts.

Tuesday, June 21, 2016

Launching a Go App on Heroku

Lehigh is part of the KEEN network, an organization that promotes more entrepreneurial-minded learning in engineering curriculum.  This summer, as a KEEN project, Corey Caplan and I are designing some fun new courseware for our Software Engineering course.

Our intention is to do everything in Java within the course.  But when I need to figure out something about web backends in a hurry, I'd rather use Go.  Today was one of those times.

Without going into too much detail, I have a web app that I wanted to stop running via localhost, and start running on Heroku.  (If you're thinking this means that our Software Engineering students are going to start learning how to deploy their apps on Heroku's PaaS, you're right!).  Below is something of a recipe for how I got it to work.

Confession: this turned out to be a lot harder than I expected, and it was probably my fault.

Caveat: the recipe below is possibly a good bit more complex than it needs to be... but it works, and seems to be repeatable.

Background: I had an app that looked like this:

  • /src/admin/*.go -- a simple admin program
  • /src/appserver/*.go -- the code for the server
  • /web/... -- the entire web frontend, designed as a single-page webapp
  • /env -- a script to set the environment variable when running the app locally
  • /setgopath.sh -- a script to set the GOPATH to the root of this project
There were a few more things in the folder, like a .gitignore, but they aren't important to this discussion.

Note, too, that I like to have a different GOPATH for each Go project, instead of checking them all out into the same place.  I organize my work in folders: teaching, research, etc.  Using Visual Studio Code, I can just open a bash prompt, source my setgopath.sh script, type "code &", and I've got an IDE, a shell, and everything else I need.
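
I haven't shown setgopath.sh, because there's almost nothing to it.  Here's a sketch of the sort of script I mean (the exact contents are an assumption):

# setgopath.sh -- source this from the project root: 'source ./setgopath.sh'
export GOPATH="$(pwd)"
export PATH="$PATH:$GOPATH/bin"
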

Dependencies: Here's the first reason why this app was interesting: it uses Google's OAuth provider for authentication, and it connects to a MongoDB instance.  There are four dependencies that I had to 'go get':

  • go get golang.org/x/oauth2
  • go get golang.org/x/oauth2/google
  • go get gopkg.in/mgo.v2
  • go get gopkg.in/mgo.v2/bson
And my code is in a bitbucket repository.  Let's say it's bitbucket.org/me/myapp.  When I started, I had a checkout of myapp on the desktop.  So there was a folder ~/Desktop/myapp, in which was a .git/ folder and all the stuff mentioned above.

Restructuring:  This was probably overkill, but it worked.  I started by creating a new folder on the desktop called myapp_heroku.  In it, I made a src/bitbucket.org/me folder, and I moved myapp/ from the Desktop into that folder.  I also changed my setgopath.sh script, so that Desktop/myapp_heroku is the new GOPATH.

Note: now when I'm working on this project, I traverse all the way into the src/bitbucket.org/me/myapp folder, and I work there, but when I do a 'go install' or a 'go get', things are placed a few levels up in the directory tree.

After restructuring, I removed some cruft from the build folder.  Previously, there were bin/ and pkg/ folders in myapp... I got rid of them.  I also removed any source folders that were fetched via 'go get', because dependent files go elsewhere now.

Using godep:  Our goal in this step is to get all the code we depend on, in a manner that will ensure that Heroku grabs the same code when it builds updated versions of the app.

This is where things became unintuitive.  Go, of course, doesn't have any built-in mechanism for managing dependencies.  Godep essentially just vendors everything into the source tree, which I don't particularly like, but it suffices.

Naturally, we need to get godep first, and add it to our path:
  • go get github.com/tools/godep
  • export PATH=$PATH:$GOPATH/bin
With that in order, we should restructure our repository ever so slightly:
  • git mv src cmd
  • git mv cmd/appserver cmd/myapp
I don't know why these steps were necessary.  But stuff really didn't work until I made both of those changes.  The Heroku docs obliquely state the first requirement, without any explanation, and the second requirement (that the main program you want to run should have the same name as your repository) was simply how every tutorial I read happened to do it.  None of those tutorials had multiple executables in their projects.

(Update: I might not have needed to rename src/appserver.)

We can use godep to fetch the packages on which we depend:
  • godep get golang.org/x/oauth2
  • godep get golang.org/x/oauth2/google
  • godep get google.golang.org/appengine
  • godep get gopkg.in/mgo.v2
  • godep get gopkg.in/mgo.v2/bson
Oddly, when fetching oauth2, we get an error that appengine isn't available.  For me, doing a recursive get (godep get golang.org/x/oauth2/...) didn't work.  So I manually got one more package.

Now we can take the 'vendoring' step:
  • godep save ./...
And voila!  There's a folder called 'vendor', with all of the code we depend upon, and there's also a Godeps folder.  Too bad it won't work.

The problem is that we're going to push our code to a Heroku "dyno" (think "container") and it's going to build the code.  But the mgo.v2 library's optional sasl support will be built when we push to Heroku.  That support depends on libsasl-dev being available on the host machine at build time.  The image for the Heroku dyno I'm using doesn't have libsasl-dev.  So if we were to push this repository to Heroku, it wouldn't build, and the code would be rejected.

The fix is easy: just delete the sasl folder from the vendored mgo.v2:
  • rm -rf vendor/gopkg.in/mgo.v2/internal/sasl/
Ugly, but it works.  And indeed, we're close to having everything work at this point.  To test that our vendoring is good, try to locally build the project:
  • godep go install -v bitbucket.org/me/myapp/cmd/myapp
The code should build... and it should use the vendored versions of the libraries.

Heroku Stuff: Heroku has a few more requirements that we need to satisfy.  First, we need a file called Procfile.  Its contents will just be "web: myapp".  Second, we need an app.json file.  Its contents are a bit more complex, though still straightforward:


{
  "name": "myapp",
  "description": "MyApp App Server",
  "keywords": [
    "go",
    "MyApp"
  ],
  "image": "heroku/go:1.6",
  "mount_dir": "src/bitbucket.org/me/myapp",
  "website": "https://bitbucket.org/me/myapp",
  "repository": "https://bitbucket.org/me/myapp"
}

Now we can actually create the heroku app.  I was working in Git Bash for Windows, which isn't supported by the Heroku toolbelt.  So I had to switch to the command prompt, and log in:

  • cd \Users\Me\Desktop\myapp_heroku\src\bitbucket.org\me\myapp
  • heroku login
  • heroku apps:create myapp
At this point 'heroku local' should work.  To push to Heroku, we first 'git add' the vendor folder and all of our other recent additions, and then 'git commit'.  Then we can 'git push heroku master'.  This takes longer than a usual git push, because it doesn't finish until Heroku is done building and verifying our program.
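
Concretely, that sequence looks something like the following sketch (the exact set of files to add depends on what's new in your repository):

git add vendor/ Godeps/ Procfile app.json
git commit -m "vendor dependencies; add Heroku files"
git push heroku master
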

Are We Done Yet?  Not really.  If you 'heroku run bash', you can see that bin/admin is present in the dyno, as is bin/myapp.  That's a good sign.  But our app isn't running yet.  One issue I had was that I needed to manually start the app:
  • heroku ps:scale web=1
The other issue is that we didn't yet set up the environment variables on Heroku.  We need to 'heroku config:set DBCONNECTSTRING=...' in order to let our app know how to find our cloud-hosted MongoDB instance, we need to set some OAUTH secrets, and we need to set environment variables for whatever else the app is expecting.  But that depends on the app, not on Heroku, so I'm not going to discuss it here.
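
On the app side, those values arrive through the environment like any other variable.  Here's a minimal sketch; DBCONNECTSTRING is the config var named above, PORT is set by Heroku itself (a web dyno must listen on it), and the real HTTP handler setup is elided:

package main

import (
    "log"
    "net/http"
    "os"
)

func main() {
    // Heroku sets PORT for web dynos; fall back to a default for local runs
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    // DBCONNECTSTRING is the config var set above; the real app would hand
    // it to mgo.Dial
    log.Println("have db connect string:", os.Getenv("DBCONNECTSTRING") != "")
    log.Fatal(http.ListenAndServe(":"+port, nil))
}
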

Wrap-Up:  It took longer than I expected to get this to work.  Since I'll probably have to do it again, I thought it would be worth writing up the steps I took.  If this is helpful to you, too, please leave a comment and let me know.

Monday, March 28, 2016

The transaction_wrap feature in GCC

When using transactional memory, a common challenge is that functions in standard libraries cannot be called from transactions.  Sometimes the incompatibility is unavoidable (for example, because the library does something that cannot be rolled back).  But in other cases, the implementation is not transaction-safe even though other (less performant) implementations could be.

The "transaction_wrap" attribute allows you to specify that function w() should be called in place of o() whenever o() is called from within a transaction.  The syntax is not too cumbersome, but there are a few gotchas.  Here's an example to show how it all works:

// This is a quick demonstration of how to use transaction_wrap in GCC

#include <iostream>
using namespace std;

// We have a function called orig(), which we assume is implemented in a
// manner that is not transaction-safe.
int orig(int);

// This line says that there is a function called wrapper(), which is
// transaction-safe, which has the same signature as orig().  When a
// transaction calls orig(), we would like wrapper() to be called instead.
int wrapper(int) __attribute__((transaction_safe, transaction_wrap (orig)));

// Here is our original function.  It does two things:
//
// 1 - saves its operand to a volatile variable iii.  This is not
//     transaction-safe!
// 2 - adds one to its operand and returns the sum
volatile int iii;
int orig(int x) {
    iii = x;
    return x + 1;
}

// Here is our wrapper function.  It adds two to its operand and returns the
// sum.  Note that we have explicitly implemented this in a manner that
// differs from orig(), so that we can easily see which is called
int wrapper(int x) {
    return x + 2;
}

// Our driver function calls orig(1) from three contexts: nontransactional,
// atomic transaction, and relaxed transaction, and prints the result of each
// call
//
// Be warned: the behavior is not what you expect, because it depends on the
// TM algorithm that is used.  For serial-irrevocable (serialirr), the result
// is (2,2,2).  For serial, ml_wt, and gl_wt, the result is (2, 3, 3).  For
// htm, the result is (2,3,2).
int main() {
    int x = orig(1);
    cout << "orig(1) (raw) == " << x << endl;
    __transaction_atomic { x = orig(1); }
    cout << "orig(1) (atomic) == " << x << endl;
    __transaction_relaxed { x = orig(1); }
    cout << "orig(1) (relaxed) == " << x << endl;
    return 0;
}

Compile the code like this:

g++ -fgnu-tm -std=c++11 -O3 test.cc -o test

And as suggested in the comments, the output will depend on the ITM_DEFAULT_METHOD you choose:

ITM_DEFAULT_METHOD=serialirr ./test
orig(1) (raw) == 2
orig(1) (atomic) == 2
orig(1) (relaxed) == 2
ITM_DEFAULT_METHOD=serial ./test
orig(1) (raw) == 2
orig(1) (atomic) == 3
orig(1) (relaxed) == 3
ITM_DEFAULT_METHOD=ml_wt ./test
orig(1) (raw) == 2
orig(1) (atomic) == 3
orig(1) (relaxed) == 3
ITM_DEFAULT_METHOD=gl_wt ./test
orig(1) (raw) == 2
orig(1) (atomic) == 3
orig(1) (relaxed) == 3
ITM_DEFAULT_METHOD=htm ./test
orig(1) (raw) == 2
orig(1) (atomic) == 3
orig(1) (relaxed) == 2

Thursday, October 29, 2015

E-Mail Merge in Go

Mail merge is a funny thing.  Once a year, I use "mail merge" in Microsoft Office to produce envelopes that are physically mailed.  Mail merge is really good for that... you can make a single Word doc that is easy to print, and then you've got all the physical documents you need, ready to be taken to a physical post office.

As a professor, there are many, many times that I need to do a mail merge that results in an email being sent.  Partly because I do a lot of work in Linux environments, and partly because of other oddities of how I like to work, I usually have a hybrid Excel-then-text workflow for this task.

The first step is to produce a spreadsheet, where each column corresponds to the content I want placed into an email.  Ultimately, I save this as a '.csv' file.  Importantly, I make sure that each column corresponds to text that requires no further edits.  If I'm sending out grades, I'll store an email address plus three columns: your sum, the total, and your average.  You could imagine something like the following:
bob@phonyemail, 15, 20, 75% 
sue@phonyemail, 19, 20, 95%
...
The third step (yes, I know this sounds like Underpants Gnomes) is that I end up with one file per email, each saved with a name corresponding to the recipient's email address, ready to be sent, and I use a quick shell script like this to send the files:


for f in *; do mutt -s "Grade Report" -c myemail@phony.net $f < $f; done


That is "for each file, where the name happens to be the same as the full email address of the recipient, send an email with the subject 'Grade Report', cc'd to me, to the person, and use the content of the corresponding file as the content of the email".

So far, so good, right?  What about phase two?  I'm pretty good with recording emacs macros on the fly, so I used to just record a macro of me turning a single line of csv into a file, and then replay that macro for each line of the csv.  It worked, it took about 10 minutes, but it was ugly and error-prone.

I recently decided to start learning Google Go (in part because one of the founders of a really cool startup called Loopd pointed out that native code performance can make a huge difference when you're doing real-time analytics on your web server).  Since I've simplified my problem tremendously (remember: the csv has text that's ready to dump straight into the final email), the Go code to make this work is pretty simple.  Unfortunately, it wasn't as simple to write as I would have hoped, because the documentation for templates is lacking.

Here's the code:


package main

import (
    "encoding/csv"
    "flag"
    "io"
    "os"
    "text/template"
)

// Wrap an array of strings as a struct, so we can pass it to a template
type TWrap struct {
    Fields *[]string
}

// Parse a CSV so that each line becomes an array of strings, and then use
// the array of strings with a template to generate one file per csv line
func main() {
    // parse command line options
    csvname := flag.String("c", "a.csv", "The csv file to parse")
    tplname := flag.String("t", "a.tpl", "The template to use")
    fnameidx := flag.Int("i", 0, "Column of csv to use as output file basename")
    fnamesuffix := flag.String("s", "out", "Output file suffix")
    flag.Parse()

    // load the text template
    tpl, err := template.ParseFiles(*tplname)
    if err != nil {
        panic(err)
    }

    // load the csv file
    file, err := os.Open(*csvname)
    if err != nil {
        panic(err)
    }
    defer file.Close()

    // parse the csv, one record at a time
    reader := csv.NewReader(file)
    reader.Comma = ','
    for {
        // get next row... exit on EOF
        row, err := reader.Read()
        if err == io.EOF {
            break
        } else if err != nil {
            panic(err)
        }
        // create output file for row
        f, err := os.Create("./" + row[*fnameidx] + "." + *fnamesuffix)
        if err != nil {
            panic(err)
        }
        // apply template to row, dump to file; close the file now rather
        // than deferring (a defer in a loop holds every file open until
        // main returns)
        if err := tpl.Execute(f, TWrap{Fields: &row}); err != nil {
            panic(err)
        }
        f.Close()
    }
}


This lets me make a "template" file, merge the csv with it, and output one file per csv row.  Furthermore, I can use a specific column of the csv to dictate the filename, and I can provide a suffix (which makes the shell script above a tad trickier, but it's worth it).

The code pulls in each csv row as an array of strings.  That being the case, I can wrap the array in a struct, and then access any array entry via {{index .Fields X}}, where X is the array index.

To make it a tad more concrete, here's a sample template:


Programming Assignment #1 Grade Report

Student Name:  {{index .Fields 2}} {{index .Fields 1}}
User ID:       {{index .Fields 0}}
Overall Score: {{index .Fields 3}}/100

The script uses command line arguments, so it's 100% reusable.  Just provide the template, the csv, the column to use as the output filename, and the output file suffix.
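
For example, with the program saved as mailmerge.go (a name I'm making up) and the grade-report template above saved as grade.tpl, an invocation might look like this:

go run mailmerge.go -c grades.csv -t grade.tpl -i 0 -s txt

That reads grades.csv, uses column 0 (the user ID) as each output file's basename, and writes one .txt file per row.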

The code isn't really all that impressive, except that (a) it's short, and (b) it is almost as flexible as code in a scripting language, yet it runs natively.  The hardest part was finding good examples online for how to get a template to write to a file.  It's possible I'm doing it entirely wrong, but it seems to work.  If any Go expert wants to chime in and advise on how to use text templates or the csv reader in a more idiomatic way, please leave a comment.

Wednesday, April 29, 2015

Saving time with VBA and Outlook

Over the years, I've used Visual Basic for Applications in a lot of ways.  I've never really thought of myself as an expert, but I have written a fair number of VB scripts, even though I'm mostly a Unix/C++/Java programmer.

One thing I've always appreciated about the VB community is that there is a lot of code sharing.  One script I stumbled on a while back downloads all attachments from all selected messages in Outlook.

Our department scanner will send me a separate email for each file that I scan, which means that I can scan all of my students' assignments, one at a time, and have a digital copy of each.  But forwarding those on to the students is usually a pain.

Enter VBA... I used this script to download all the attachments at once.  Then I used the preview pane in Windows to quickly check that the file names were time-ordered in the same sequence as the students' user IDs.  A few lines of bash later, and all 89 PDFs were mailed.  Hooray!
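
For the curious, those few lines of bash looked something like this sketch.  The scan file pattern, the ids.txt file, and the address format are all assumptions here:

# Pair the time-ordered scans with sorted user IDs, then mail each PDF as
# an attachment.  Assumes ids.txt holds one user ID per line, in scan order.
paste <(ls Scan*.pdf | sort) ids.txt | while read f id; do
    mutt -s "Graded Assignment" -a "$f" -- "$id@lehigh.edu" < /dev/null
done
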

Tuesday, March 17, 2015

Getting Started with JUnit

I had some fun learning about JUnit recently.  I've always believed that it's important to develop incrementally.  The neat thing (to me) about unit testing is that it encourages incremental development -- if you have lots of tests that don't pass, then the natural thing to do is to pick them off, one at a time, and fix them.  In grad school, and now as a professor, I've had a fair number of occasions where someone said "I'm almost done writing it up, I should be ready to compile in a day or two".  Perhaps encouraging students to develop their tests first will discourage them from falling into that pattern of behavior.
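
To make that concrete, here's a minimal JUnit 4 test of the sort a student might write first and then make pass.  The Counter class is hypothetical; the point is just the shape of a failing-first test:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CounterTest {
    // Write this before Counter exists, watch it fail, then implement
    // increment() and getValue() to make it pass
    @Test
    public void incrementAddsOne() {
        Counter c = new Counter(); // hypothetical class under test
        c.increment();
        assertEquals(1, c.getValue());
    }
}
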

Anyhow, I built a tutorial about JUnit, for use in my CSE398 class.  Feel free to share your thoughts on the tutorial, JUnit, and test-driven development in the comments!

Tuesday, March 3, 2015

Callbacks and Scribble Mode

Last week, we had a mobiLEHIGH tutorial session, and a student asked about how to make Fruit Ninja with LibLOL.  It turns out that getting the right behavior isn't all that hard... you can use the "scribble" feature to detect a "slice" movement on the screen, and configure the obstacles that are scribbled to have the power to defeat enemies.

There was just one problem... configuring the obstacles requires changing the LibLOL code.  I don't discourage people from changing LibLOL, but it's better to have an orthogonal way of getting the same behavior.  In this case, it's easy: let the scribble mode code take a callback, and use that callback to modify an obstacle immediately after it is created.

This is one of those changes that I can't help but love... there's less code in LibLOL, and more power is exposed to the programmer.  But it's not really any harder, and there's less "magic" going on behind the scenes now.

Here's an example of how to provide a callback to scribble mode:


    // turn on 'scribble mode'... this says "draw a purple ball that is 1.5x1.5 at the
    // location where the scribble happened, but only do it if we haven't drawn anything in
    // 10 milliseconds."  It also says "when an obstacle is drawn, do some stuff to the
    // obstacle".  If you don't want any of this functionality, you can replace the whole
    // "new LolCallback..." region of code with "null".
    Level.setScribbleMode("purpleball.png", 1.5f, 1.5f, 10, new LolCallback(){
        @Override
        public void onEvent() {
            // each time we draw an obstacle, it will be visible to this code as the
            // callback's "attached Actor".  We'll change its elasticity, make it disappear
            // after 10 seconds, and make it so that the obstacles aren't stationary
            mAttachedActor.setPhysics(0, 2, 0);
            mAttachedActor.setDisappearDelay(10, true);
            mAttachedActor.setCanFall();
        }
    });

I'm starting to think that I should redesign more of the LibLOL interfaces to use callbacks... what do you think?