Phil's Musings
notes from a peripatetic programmer

Recent Posts


Embeddable and Accessible PuzzleScript Games

I ❤️ Puzzle Games.

I want kids to play games that don’t talk down to them and that encourage them to figure out the game mechanics on their own, and I want non-sighted people to be able to play video games too.

Introducing: the puzzlescript package.

a couple levels of the game Pot Wash Panic

Many games have puzzles sprinkled in, but I like ones that are distilled into just solving puzzles.

I first realized how engrossing these were while playing The Witness with friends. One person would control the player but everyone would be solving the puzzle in their head just by watching. We played together for days.

Then, I tried it with kids. It turns out a 6-year-old grasped the puzzle concepts in The Witness faster than her parents did, and she ended up explaining to them how the puzzles worked.

Then, I started looking for more and stumbled upon increpare’s awesome PuzzleScript.

Unlike other video games that tend to teach kids to memorize facts (like “Carmen Sandiego”, “Oregon Trail”, or “Math Blaster”), these encourage kids to think critically and problem-solve in groups.

Specifically, PuzzleScript games are interesting because the whole game is a single text file and the levels are typically pretty small. That makes it easy for kids to explore, and it also makes the games playable by people who can’t see: every sprite has a human-readable name, so those names can be read aloud instead of just showing colored pixels.
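
To make that concrete, here is an illustrative sketch (my own example, not the package’s actual rendering code) of how a renderer can expose a sprite’s name to a screen reader:

function renderCell(tableCell, spriteName, backgroundColor) {
  // Because every sprite has a human-readable name, expose it to assistive technology...
  tableCell.setAttribute('aria-label', spriteName) // e.g. "Player", "Crate", "Wall"
  // ...while sighted players still see the sprite's color
  tableCell.style.backgroundColor = backgroundColor
}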

Background

Inspiration came both from the many easter eggs found in software (the Chrome Dino Game, various Microsoft ones, and command-line tools) and from a desire to get more people in general playing video games.

Goals

  • get kids in classrooms playing these games
  • get vision-impaired people playing these games
  • get 404 pages to have these games when something is broken (like the Chrome Dino Game). Example: this website’s 404 page
  • get people to play these instead of Sudoku to exercise their brains more

Try it out!

Games can run embedded in a webpage or in a commandline terminal. There are over 200 games to choose from!

Terminal

All you need is Node.js 4 or higher; then run the following:

npm install --global puzzlescript
puzzlescript

See philschatz/puzzlescript for examples.

Embed in a Webpage

<body>
  <table id="the-game-ui"></table>
  <script src="https://unpkg.com/puzzlescript@3.0.0/lib/webpack-output.js"></script>
  <script>
    const gameSourceString = '...' // Source code for the game
    const table = document.querySelector('#the-game-ui')
    const engine = new PuzzleScript.TableEngine(table)
    engine.setGame(gameSourceString, 0 /*startLevel*/)
    engine.start()
  </script>
</body>
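
For example (a hedged sketch; the URL below is just a placeholder), the page could fetch the game’s source text over HTTP before handing it to the engine:

// Placeholder URL: point this at wherever you host the game's source text file
fetch('https://example.com/games/pot-wash-panic.txt')
  .then(response => response.text())
  .then(gameSourceString => {
    const table = document.querySelector('#the-game-ui')
    const engine = new PuzzleScript.TableEngine(table)
    engine.setGame(gameSourceString, 0 /*startLevel*/)
    engine.start()
  })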

Visit the demo page to play several games (there are over 200), or see this website’s 404 page for an example of using it in a 404 page.

For Next Time

Next time, I would like to go into more detail about how the code is organized. In the meantime, if something does not work or is not clear, create an issue.

More Examples

Mirror Isles (original)


More examples can be found in the README or see the demo page to play them in a browser.

Keep Reading...

Automatically record HTTPS requests instead of manually creating mock files

Manually creating mock HTTP response files seems to be the go-to method for developers testing code that talks to a server.

But wouldn’t it be easier if you could just point your tests at a server, record the HTTP responses, and then play them back?

Now there is fetch-vcr!

Since the fetch API is built into browsers, there is an easier way to do this in JavaScript!

fetch-vcr will record and play back those HTTP requests for you.

How does it work?

Just load the fetch-vcr package in your tests and it will proxy any calls to fetch(url, options).

Each HTTP response is saved to the filesystem as a cassette (also known as a fixture). The cassette contains all of the response headers and the response body.

Depending on the VCR_MODE environment variable, it will do the following:

  • playback: (default) it will load the recorded cassette of the request instead of talking to the server
  • record: it will talk to the server and save the response to a local file (cassette)
  • cache: it will try to load the cassette and if the cassette is not found, it will do the same thing as record

Because it saves files to the filesystem, it works slightly differently when used in NodeJS and when used in a browser.

How does it work in NodeJS?

It’s super-simple. When running tests:

  1. Make sure your code runs var fetch = require('fetch-vcr'); instead of var fetch = require('node-fetch');
  2. Record your cassettes by running VCR_MODE=record npm test instead of just npm test
  3. Optionally, change the directory that cassettes are saved to by running fetchVCR.configure({fixturePath: '/path/to/folder/'})

Voilà! You have just recorded your HTTP requests.
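
Here is a minimal sketch of what a test might look like (the test framework, URL, and fixture path are placeholders; the fetchVCR calls follow the steps above):

const fetchVCR = require('fetch-vcr')

// Optional (step 3): store cassettes next to the tests
fetchVCR.configure({ fixturePath: __dirname + '/_fixtures/' })

it('fetches the user (recorded when VCR_MODE=record, played back otherwise)', async () => {
  const response = await fetchVCR('https://api.example.com/users/1')
  const user = await response.json()
  // assert on `user` here
})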

How does it work in a browser?

For browser tests it is a little bit more complicated because browsers do not save files to the filesystem.

Fortunately, PhantomJS (and browsers driven by Selenium) let you send data out of the page using alert(msg). There are other ways if you would rather do it differently.

You can use the steps listed above but will need to do the following additional steps:

  1. Pass the VCR_MODE environment variable to the browser
  2. Replace fetchVCR.saveFile(rootPath, filename, contents) => Promise with a function that calls alert(JSON.stringify([rootPath, filename, contents])) (a sketch follows this list)
  3. Parse the alerts and save them to disk using fs.writeFile(filePath, contents)
    • Note: This code does not run in the browser; it runs in the JS file given to PhantomJS (or in your Selenium runner)
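
As a rough sketch of step 2, using only the saveFile signature described above:

const fetchVCR = require('fetch-vcr')

// Ship the cassette out of the browser via alert() so the PhantomJS/Selenium
// driver script can parse it and write it to disk (step 3)
fetchVCR.saveFile = (rootPath, filename, contents) => {
  alert(JSON.stringify([rootPath, filename, contents]))
  return Promise.resolve()
}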

Check out fetch-vcr for more info.

Keep Reading...

Introducing a Serverless Issue board for GitHub repositories and organizations!

At openstax.org, different projects use different ticket trackers because people have different preferences: we use Trello, Pivotal, GitHub Issues, and Wunderlist. But the one thing that is common across all of our open source projects is GitHub.

As we’ve grown and as our projects have started to overlap, we’ve realized there is value in having one common place to look to see what’s going on across all of the projects.


gh-board does all the things we need and more. If it doesn’t do something, submit a Pull Request and it will!

Other ticket trackers

Using any ticket tracker alongside GitHub adds friction:

  • it’s hidden behind a login
    • (when a ticket is linked in our IM client we don’t get a nice preview)
  • the logins are different so @mentioning people is annoying
  • you have to remember a different type of markup language
  • you have to remember to link everything twice
    • so people looking at the ticket can get to the code and vice-versa
  • you have to update the tracker when the Pull Request status changes (created, review, tested, merged, etc)
  • they don’t show the state of Pull Requests, so you have to click through to see each Pull Request’s status
  • URLs are difficult to share because frequently the state of the page is not in the URL
    • i.e. which milestones, columns, or other filter criteria are being used

As a developer, tester, or UX designer, you’d still have to check multiple places to stay on top of everything (or hope that your email client doesn’t explode!)

GitHub isn’t perfect either

But GitHub Issues is not without its limitations:

  • Issues are per-repository (we have 100 repositories)
  • Milestones are per-repository
  • Labels are per-repository
  • It is difficult to add additional metadata to a ticket
  • There is no easy way to have kanban-style columns

How is this different from other GitHub-based trackers?

It has a few features that other GitHub-based ticket trackers (like huboard or waffleio) lack:

  • open source & free!
  • you can run it anywhere!
    • you can still use vanilla GitHub (nothing to import/export and no vendor lock-in)
  • real-time collaborative editing of Issues
  • shows the state of related Issues/Pull Requests
  • shows CI status and merge conflict status
  • has charts (burndown, gantt, etc)
  • keeps track of multiple repositories from different organizations
  • and productivity-enhancing Easter Eggs!


CI Status and Merge Conflict

  • CI Status shows up as a green check mark or a red x on the top-right corner of a card
  • Merge conflicts are shown with a yellow warning and have a diagonal striped background


Real-time Collaborative Editing


Issue Images

If an Issue or Pull Request contains an image, then it will be shown in the Issue.


Easter Eggs

Plus, it comes with productivity-enhancing easter eggs you can unlock!


Charts

Since it stores all the open and closed tickets locally, we can generate all the fancy charts that other ticket trackers generate.

  • Burnup chart: it clearly shows when new work is added to a Milestone
  • Gantt chart: shows when milestones are due and colors the bar based on the status of all the Issues


How does it work?

It:

  • uses octokat.js and polls the GitHub API for changes
  • uses the 1st repository in the list to find the column labels and milestones
    • columns are defined as labels of the form ## - Column Title or you can specify a regular expression (an illustrative sketch follows this list)
  • stores all the Issues and Pull Requests in the browser (thanks to IndexedDB)
    • think of it like git clone but for Issues & Pull Requests
  • searches the IndexedDB to find related issues
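
As an illustration of that column-label convention (my own sketch, not gh-board’s actual code):

// `labels` is the repository's label list from the GitHub API (objects with a `name`);
// labels named like "0 - Backlog", "1 - In Progress", "2 - Done" become ordered columns
const COLUMN_LABEL = /^(\d+) - (.+)$/

function columnsFromLabels(labels) {
  return labels
    .map(label => COLUMN_LABEL.exec(label.name))
    .filter(match => match)                                       // ignore non-column labels
    .map(match => ({ order: Number(match[1]), title: match[2] }))
    .sort((a, b) => a.order - b.order)
}
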
Keep Reading...

Introducing the atom pull-requests package!

As a programmer who uses GitHub, I find Pull Requests are a great way to discuss code, but whenever I get feedback on a large codebase it is annoying to have to find where each change was. Since I use atom.io as my text editor, I decided to write a plugin that brings a great feature of GitHub (Pull Requests) into my text editor (which also happens to be written by GitHub). And voilà! pull-requests.

Whenever you check out a branch that has a Pull Request open on GitHub, you’ll see the GitHub comments directly on the lines of code inside Atom. And to help you find them, the Tree view on the left shows how many comments are in each directory so you can find the file.


Here’s a slightly out-of-date screencast showing the whole process (including installing the plugin):

(screencast)

How’s it made?

It uses the octokat.js npm package and the linter atom package. pull-requests checks whether your code is in a git repository (actually, a GitHub one), then checks whether the branch corresponds to a Pull Request in the repository (or in the parent repository if this is a fork), and pulls out the comments on the changed lines of each file.

Then, it uses linter to add lint messages on those lines of the files. Since linter supports HTML, pull-requests also converts the comments into HTML (complete with emojis) and adds a link back to the Pull Request on GitHub so you can continue the discussion.
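
As a rough sketch of that lookup (assuming octokat’s chained URL style; the token handling and result shape are simplified):

const Octokat = require('octokat')
const octo = new Octokat({ token: process.env.GITHUB_TOKEN }) // token source is an assumption

async function commentsForBranch(owner, repo, branch) {
  // Find an open Pull Request whose head is the current branch
  const prs = await octo.repos(owner, repo).pulls.fetch({ head: owner + ':' + branch, state: 'open' })
  const pr = (prs.items || prs)[0] // octokat versions differ in how they wrap list results
  if (!pr) { return [] }
  // Fetch the review comments attached to lines of the diff
  return octo.repos(owner, repo).pulls(pr.number).comments.fetch()
}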

Hope that helps you!

Keep Reading...

The Death of OpenStax?

OpenStax isn’t going anywhere; we have a ton of high-quality content and are revolutionizing textbook publishing for the benefit of students. But here are my thoughts on how it could be made a bit more open (and cheaper to boot!).

Problem

As background, producing OpenStax books involves a few pieces:

  1. an editor for creating a part of a book (Aloha) and organizing parts of a book into a Table of Contents
  2. rules for attribution (who authored the book)
  3. a way to convert the book into various formats (ePUB and PDF)
  4. a way to read the book online for free
  5. a way to mix-and-match and create a new book
  6. Ideally, openstax will have a way to allow others to suggest edits to a book

Solution

Non-tech-savvy

  1. Use GitHub to store the book content (book viewer, source for openstax books, and blog post)
  2. Use a browser editor to edit the book
  3. Use a little server to automatically create PDFs and ePubs (travis-ci or philschatz/pdf-ci)
  4. Support attribution automatically (example)
  5. Support “Suggested Edits” to content automatically via GitHub’s “Pull Request” (example)
  6. Support “Derived Copies” of content automatically via GitHub’s “Fork”
  7. Development of code and content using GitHub Issues via philschatz/gh-board

Tech-savvy

  1. All book content is stored directly in GitHub (see sources for examples)
  2. Replace the editor with oerpub/github-book-editor which saves directly to GitHub
  3. Replace the web view with Autogenerated Sites using GitHub (see philschatz.com/books and the various book repositories )
  4. Replace PDF (and ePub) generation with travis-ci or philschatz/pdf-ci (Every time GitHub updates, generate a new PDF)
  5. Support Derived copies using GitHub’s “Fork”
  6. Support “Suggested Edits” (which openstax used to have) via GitHub’s Pull Requests
  7. Support a diff of “Suggested Edits” via GitHub’s Markdown diff view (see blog post on books in GitHub)
  8. Support software (and content) development using something on top of GitHub Issues like huboard or ideally philschatz/gh-board (no server/subscription required)

Free Stuff!

  1. No server costs! (except PDF generation which can be minimized)
  2. autogenerated PDFs for every edition (see GitHub Tags and Releases)
  3. easy contributions from the entire world
    • travis-ci can make sure the Markdown is well-formed
  4. human-readable changes
  5. Issue tracking for content in each book
  6. Revisions for each book
  7. More reusability/remixability than you could imagine

The Kink

There is a “kink” in this process which I’d be happy to elaborate on:

  1. Each book needs to be a separate repository (cannot “easily” combine multiple books into 1)
  2. Need to learn Markdown (specifically kramdown) unless the editor is “smart enough”

That’s how I would “kill” OpenStax with minimal effort and stay true to the openness that has helped OpenStax thrive.

For more projects, check out my repositories.

Keep Reading...