Initial release of gwt-phys2d (Javascript/GWT physics engine)

I’ve been exposing myself to web programming lately, and in order to teach myself Google Web Toolkit (GWT), Google App Engine (GAE), the Eclipse development environment, and Javascript, I put together gwt-phys2d. Yes, it’s a Javascript physics engine — committing the occasional digital atrocity builds character, right?

You can check out a demo here: http://gwt-phys2d.appspot.com/ (library/demo source code also available for download)

It’s basically a “port” of a Java library called Phys2d. For those unfamiliar with Google Web Toolkit, it allows you to program web applications in Java (plus a decent portion of the java.util and java.io libraries), which it then compiles into Javascript. If you’re unfamiliar with programming, let me assure you that this is very weird, yet amazing, not unlike Christopher Walken performing Lady Gaga’s Poker Face.

Starting with the Phys2d code, here’s basically what the “port” involved:

  • Started a GWT/GAE project in Eclipse and imported Phys2d into it. Somewhat surprisingly, this part took the most effort, mostly because I’d never used Eclipse or GWT before. Eclipse is really cool, although I’m still kind of weirded out by how many things it does automagically. I mean, if something’s wrong with your code, you can just right-click on it and have it automatically add in import statements (HUGE timesaver), declarations, etc. Freaky voodoo. I haven’t had a chance to play with it yet, but I really want to try the plugin which gives you vi functionality in Eclipse.
  • Fortunately, most of the core Phys2d library only calls library functions that are already implemented in the GWT emulation libraries. A few classes relied on GUI functionality, but I just removed those from the source.
  • On the flip side, the actual demos for Phys2d are heavily reliant on AWT (a GUI toolkit for Java), which of course isn’t emulated in GWT. I instead used HTML5 canvas functionality to render the shapes, via the gwt-g2d library. I also learned that apparently Chrome doesn’t support the canvas line segment drawing, while Firefox does. Hrmph. I only ported one demo over, but it should be easy to do the same for the others.
  • Also, the original demos are driven by a constantly-running while loop, which of course doesn’t work with the Javascript/HTML execution model. I instead put most of the demo logic into a function called by a repeating Timer every several milliseconds (see the first sketch after this list).
  • After finally getting things running (all on my MSI Wind netbook, which is coincidentally my primary development environment ;), I found that it ran godawfully slowly. GWT/Chrome’s Speed Tracer wasn’t anywhere near as helpful as I’d hoped it would be, so I figured out how to use Firebug instead. Firebug showed that most of the time was being spent in repeatedly-run collision detection loops. It turned out that there were a number of “new” calls inside these loops, creating gajillions of temporary objects every frame, and I suspected that the occasional pauses observed while running the demo were due to garbage collection of all those objects. I’m still not sure this is the ideal solution, but I instead created the temporary objects as class members and reused them in the loops (see the second sketch after this list). I did this for a couple of the functions where most of the time was being spent, and was glad to see that it improved both the time spent in those functions and overall performance quite a bit.
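
To make the rendering and timer bullets a bit more concrete, here’s a first sketch of the Timer-driven loop. It’s illustrative rather than actual gwt-phys2d code: it assumes the port keeps Phys2d’s original World/Body/Box classes and package layout, and it draws with GWT’s built-in Canvas/Context2d wrapper instead of the gwt-g2d library the demo actually uses.

    import com.google.gwt.canvas.client.Canvas;
    import com.google.gwt.canvas.dom.client.Context2d;
    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.user.client.Timer;
    import com.google.gwt.user.client.ui.RootPanel;

    import net.phys2d.math.Vector2f;
    import net.phys2d.raw.Body;
    import net.phys2d.raw.World;
    import net.phys2d.raw.shapes.Box;

    public class DemoLoopSketch implements EntryPoint {
        // Gravity pointing "down" in canvas coordinates, 10 solver iterations.
        private final World world = new World(new Vector2f(0.0f, 10.0f), 10);
        private Canvas canvas;

        public void onModuleLoad() {
            canvas = Canvas.createIfSupported();  // assume canvas support for this sketch
            canvas.setCoordinateSpaceWidth(500);
            canvas.setCoordinateSpaceHeight(500);
            RootPanel.get().add(canvas);

            Body crate = new Body(new Box(40.0f, 40.0f), 10.0f);
            crate.setPosition(250.0f, 50.0f);
            world.add(crate);

            // Instead of a blocking while loop, a repeating Timer drives the demo:
            // each tick advances the simulation one step and redraws the scene.
            new Timer() {
                public void run() {
                    world.step();
                    render();
                }
            }.scheduleRepeating(20);  // roughly 50 frames per second
        }

        private void render() {
            Context2d ctx = canvas.getContext2d();
            ctx.clearRect(0, 0, 500, 500);
            for (int i = 0; i < world.getBodies().size(); i++) {
                Body body = world.getBodies().get(i);
                // Draw each body as a simple outlined square centered on its position.
                ctx.strokeRect(body.getPosition().getX() - 20,
                               body.getPosition().getY() - 20, 40, 40);
            }
        }
    }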

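And here’s the second sketch, showing the shape of the temporary-object fix. Vec2 and NarrowPhaseSketch are made-up stand-ins for illustration, not gwt-phys2d classes; the point is simply hoisting per-iteration allocations into reused fields so the compiled Javascript doesn’t churn out garbage on every frame.

    // Made-up stand-ins for illustration; these are not gwt-phys2d classes.
    final class Vec2 {
        float x, y;
        void set(float x, float y) { this.x = x; this.y = y; }
    }

    final class NarrowPhaseSketch {
        // Before: something like "Vec2 delta = new Vec2()" inside the inner loop,
        // allocating thousands of short-lived objects per frame.
        //
        // After: one scratch vector owned by the instance and reused each iteration.
        private final Vec2 scratchDelta = new Vec2();

        void collideAll(float[] xs, float[] ys) {
            for (int i = 0; i < xs.length; i++) {
                for (int j = i + 1; j < xs.length; j++) {
                    // Reuse the scratch vector instead of allocating a new one.
                    scratchDelta.set(xs[i] - xs[j], ys[i] - ys[j]);
                    // ... run the actual overlap test using scratchDelta ...
                }
            }
        }
    }
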
Please feel welcome to use the library, although the source is very much in an alpha state and pretty badly organized — I basically just zipped up my source directories after I got things working and uploaded them (I think there’s a half-hearted git repository somewhere in there). If there’s any interest, I’d be happy to clean things up and perhaps start a Google Code project; I imagine it’d be pretty cool to use in game engines and such. Since this was just a self-education exercise and I have no desire to actually write anything that takes advantage of the library myself, please do let me know if you’d be interested in taking over maintenance of the code!

Here’s a list of further stuff which could be done:

  • Set up version tracking starting from the original Phys2d sources, to more easily determine what changes I actually made to them.
  • Port more demos
  • Fix the demo canvas drawing and add support for drawing more shapes
  • Improve performance further by getting rid of more ‘new’ operator calls. Alternatively, maybe make use of the ‘delete’ operator (which exists in Javascript but not Java), though I don’t know how to do this cleanly from GWT.
  • Improve the repeating timer so that it doesn’t keep trying to queue up additional frames when a computer can’t keep up with the specified timer interval (see the sketch after this list).
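
For that last timer item, here’s a rough sketch of what I have in mind (hypothetical names, not code from the library): track how far the simulation has fallen behind real time and cap the number of catch-up steps per tick, so a slow machine drops frames instead of accumulating an ever-growing backlog of work.

    import com.google.gwt.user.client.Timer;

    public class PacedLoopSketch {
        private static final int STEP_MS = 20;            // desired simulation step
        private static final int MAX_STEPS_PER_TICK = 5;  // stop catching up past this

        // System.currentTimeMillis() is emulated by GWT, so it's safe to use here.
        private long simulatedUpTo = System.currentTimeMillis();

        public void start() {
            new Timer() {
                public void run() {
                    long now = System.currentTimeMillis();
                    int steps = 0;
                    while (simulatedUpTo + STEP_MS <= now && steps < MAX_STEPS_PER_TICK) {
                        stepSimulation();          // e.g. world.step() in the demo
                        simulatedUpTo += STEP_MS;
                        steps++;
                    }
                    if (steps == MAX_STEPS_PER_TICK) {
                        // Too far behind: drop the remaining backlog rather than spiral.
                        simulatedUpTo = now;
                    }
                    renderFrame();
                }
            }.scheduleRepeating(STEP_MS);
        }

        private void stepSimulation() { /* advance the physics world one step */ }
        private void renderFrame()    { /* draw the current state to the canvas */ }
    }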

Enjoy!


Click-based visualization of the relationships between scientific fields

Many of you have likely already seen the maps of scientific fields generated from citation information. In those visualizations, scientific fields whose papers cite each other regularly get linked closely together on the map, producing a neat depiction of how different fields are related.

In a recent article on PLoS ONE by Johan Bollen et al. (original article, Nature News summary), they generate a similar visualization using click-based data instead of citations. Each “clickstream” is an anonymized sequence of user requests for research articles, and together the clickstreams are used to build a first-order Markov model of the clicks. For those who haven’t worked with Markov models before, a first-order model means calculating the probability that someone who has just clicked on an article from journal A will next click on an article from journal B, for every possible journal pair (it’s been several years for me, so my memory might be sloppy). It then applies some algorithmic foo which has the end result of arranging the journals so that those with high click-through probabilities to each other are positioned close together in 2D space.
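
To make the first-order model a bit more concrete, here’s a toy sketch (my own illustration, not the authors’ code) that walks anonymized clickstreams of journal identifiers, counts A-to-B transitions, and normalizes each row so that the transition probability is the fraction of clicks on journal A that were immediately followed by a click on journal B.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ClickstreamModelSketch {
        // counts.get(A).get(B) = number of times a click on journal B followed a click on journal A
        private final Map<String, Map<String, Integer>> counts =
                new HashMap<String, Map<String, Integer>>();

        public void addClickstream(List<String> journals) {
            for (int i = 0; i + 1 < journals.size(); i++) {
                String from = journals.get(i);
                String to = journals.get(i + 1);
                Map<String, Integer> row = counts.get(from);
                if (row == null) {
                    row = new HashMap<String, Integer>();
                    counts.put(from, row);
                }
                Integer n = row.get(to);
                row.put(to, n == null ? 1 : n + 1);
            }
        }

        /** P(next click is on journal 'to' | current click is on journal 'from'). */
        public double transitionProbability(String from, String to) {
            Map<String, Integer> row = counts.get(from);
            if (row == null || !row.containsKey(to)) {
                return 0.0;
            }
            int total = 0;
            for (int n : row.values()) {
                total += n;
            }
            return row.get(to) / (double) total;
        }
    }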

Using clicks instead of citations to generate these visualizations has some benefits and differences:

  • Much more data
  • The data is more recent, and you can easily get plenty of useful data from a specific time span
  • It includes not just data from publishing researchers, but also end-users of the data, such as doctors, nurses, government officials, undergrads writing class reports, etc.
  • It tends to be much more responsive to recent trends, which can be either a good or bad thing

I’m particularly interested in seeing how these sorts of maps may change over time. For example, I suspect that a few years from now you might see economics and “brain studies” more closely related to each other. I also find it kind of curious how “brain studies” and “brain research” are on totally different parts of the map — “brain studies” is close to cognitive science, language, and nursing (?), while “brain research” is over near physiology, animal behavior, and genetics. I’d like to see what actual journals are included in the two categories.

There are of course some privacy concerns, but it would also be neat to see how the maps compare between different institutions, or even different countries.

I do wish they had included the computer science and engineering fields, though. I imagine this comes down to the sources they used, and that one could get wider-ranging results with access to Google Scholar’s logs (::drools::). It’d be pretty cool to generate a video showing how the map evolves over the years (although where you’d get your data set is another story), with, say, computer science starting off on a branch with mathematics and electrical engineering, then becoming more closely linked with things like physics, and eventually dragging fields like music, neuroscience, brain research, etc. next to it. Some changes in the map may be obvious, but I imagine there would also be some surprises and sources of insight.


Winamp Halloween costume

[Photo of the costume, originally uploaded by Neil H.]
Here’s the Winamp (pre-3.0) costume I wore for Halloween. I used the T-Qualizer shirt from Thinkgeek as the basis of the costume, so that it would animate in response to ambient voices and music. It was really fun dancing at LindyGroove in the costume, and quite a few people came up to ask me about it. There’s also a photo of the costume lighting up in the dark.


Spinning dancer illusion; left brain vs. right brain hype?

Many of you have probably seen this animation of a spinning dancer silhouette from the Daily Telegraph, as it’s been making the rounds on various blogs and social networking sites. It’s a neat animation, but the blurb also states the following:

The Right Brain vs Left Brain test … do you see the dancer turning clockwise or anti-clockwise? If clockwise, then you use more of the right side of the brain and vice versa.

Personally, I can’t think of anything that would back up their source-less assertion, and a quick literature search doesn’t turn up anything either. I’ve chalked it up as yet another misinformative popular-press write-up, but was wondering if any readers had further insight.


Video of BrainPort on Today Show

There’s a rather neat video from the Today Show featuring blind climber Erik Weihenmayer and a researcher discussing BrainPort, a system which takes visual input from a camera and outputs it as an array of tongue stimulation. Erik demonstrates recognizing some written numerals, while a video display gives an idea of what information is being sent to his tongue. The tech seems nearly ready for market, and I’m sure it’ll eventually become more portable.

A quick YouTube search brought up the following video from a few months ago from CBS News, which shows a blind tester walking around using the system and has some spiffy graphics which show how the system works:
