Disclaimer: this is not meant to be an in-depth review of any of them. I have actually spent a very limited amount of time in many of them, so a rushed calibration/setup may have led to incorrect conclusions. Don’t hate me please 😀
A promising product showing what’s possible with current technology, albeit with lots of flaws, as expected in such an early device: screen door effect everywhere, no positional tracking, bit uncomfortable to wear, very low resolution.
Already more than 2 years old, but it still benefits from the latest Oculus software updates. A low-persistence screen, reduced screen-door effect and higher refresh rates make it the first and cheapest one to be really usable on a daily basis.
Only rotational tracking. Visually on par with the modern desktop headsets. The first high-quality VR headset for smartphones.
Benefits from the best device drivers out there right now (asynchronous spacewarp reduces requirements down to 45 FPS). Some of the highest image quality around given its lens design (when ignoring the god rays), and very comfortable for long sessions.
Similar overall quality to the CV1; usually considered a bit less comfortable, but it provided room-scale VR before anyone else did, along with proper VR controllers rather than a lousy gamepad.
Oh god, the tracking wobbles so much compared to Vive/Rift. Probably up to one centimeter at times. But it’s so cheap compared to the proper VR headsets.
Tracking uses DK2 technology, but they will soon migrate to Lighthouse. Visual quality seems on par with CV1.
Dynamic depth of field visual effect: useless. The pupils aren’t physically adapting to different depths anyway, and the effect is temporally smoothed (probably to prevent effect flickering), making the whole image feel blurry overall. Best turned off.
Foveated rendering (rendering at lower quality wherever you are not directly looking, to vastly improve framerates): works perfectly. I couldn’t notice any lag between gazing and the corresponding increase in detail. This is the future!
Visual bandwidth: unknown
Remarks: wireless with positional tracking, not going to be commercialized
The demo showed an octopus suspended in the air (water), moving up and down, and a completely flat blue background. This made it hard to judge the quality of its positional tracking.
Visual bandwidth: 7680×4320 pixels @ 90Hz = 2985 megapixels/second
Remarks: huge FOV of 180º with an 8K screen
Huge headset for a huge FOV. The demo I tried showed real time streaming video from a nearby 360º camera. I suspect the video was pretty low resolution since image quality seemed on par with a CV1.
It packs a very wide set of lenses covering most of the inside, with one Fresnel lens in front of each eye (similar in appearance to those in the Vive, with big segments, rather than the thin ones in the CV1), while the rest is filled with what seems to be a regular lens design. The visual quality didn’t suffer much in those areas; I expected a much bigger drop in detail, but nope!
The rotational tracking was lagging a good 50–200 ms, probably due to the stream being compressed server-side and decoded on or near the headset. I couldn’t test positional tracking; I’m actually not sure whether this headset even has it.
Remarks: self contained wireless device, 120º FOV
Visual quality was standard, and the headset was surprisingly comfortable to wear despite its weight. The slightly bigger field of view is nice, but the tunnel effect is still noticeable.
Another self-contained VR system with smartphone-like hardware and rotational tracking. Normal visual quality.
The computing hardware and battery are in a gamepad controller, which is wired to the headset in order to provide the image and receive the sensor data.
Made this at work, it’s how I would have liked to learn Git when I started toying with it some years ago.
Note that not a single Git command is explained. Instead, a series of images show how Git works, and what’s possible to do with it.
Looking up the actual (and unintuitive) commands and flags is left as an exercise to the reader 🙂
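For those who do want the unintuitive commands, here is a minimal, hedged cheat-sheet covering the usual commit/branch/merge cycle pictured in the slides. It runs in a throwaway directory; the names (`file.txt`, `feature`, the demo identity) are placeholders, not anything from the presentation itself:

```shell
# A throwaway repository to try the concepts on (no real files are touched).
cd "$(mktemp -d)"
git init -q .
git config user.name  demo
git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version

echo "hello" > file.txt
git add file.txt                        # stage the change (put it in the index)
git commit -q -m "first commit"         # record a snapshot

git checkout -qb feature                # create a branch and switch to it
echo "more" >> file.txt
git commit -aq -m "work on feature"     # commit on the branch

git checkout -q "$main"                 # back to the main line
git merge -q feature                    # fast-forward merge
git log --oneline                       # both commits, newest first
```

Each command above maps to one of the diagrams: staging, committing, branching (a movable pointer), and merging.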
IMPORTANT: the presentation notes are only visible in the original Grokking Git presentation (bottom panel, right below each slide), but here’s an embedded version anyway, for those too lazy to click the link:
Hello, and welcome to CPU Dealers!
In today’s episode we’re going to learn how to fit an LGA771 CPU into an LGA775 motherboard with no brute force!
So why on earth would anyone decide to put an LGA771 Xeon server CPU into a domestic LGA775 motherboard, you may ask?
Welp, because it’s fun and you get to learn stuff, that’s why!
Traditional motivation: Money
However, the usual argument is that, if you are planning a modest upgrade for your shitty old LGA775 system, not needing the latest and greatest, you can save some money this way. See, there are lots of Xeon processors on the market right now, and they are all dirt cheap. The interesting thing is that most LGA775 CPUs have Xeon equivalents:
Some people argue that Intel simply bins Xeons better than the consumer counterparts, so while being essentially the same CPU, the Xeons are more reliable, run colder, and are harder, better, faster, stronger.
The LGA775 market, on the other hand, is filled with pretty expensive CPUs. They’re all usually priced 30€ to 150€ higher than the server versions: Xeons are definitely the best bang for the buck. So the plan usually is:
- Upgrade your system to a Xeon instead of a domestic CPU.
- Fun and profit!
Side motivation: Overclocking
Some people take advantage of the lower voltages required by Xeons, and choose them not only because they’re cheaper, but because it’s in theory easier to squeeze a bit more speed out of them.
Keeping that in mind, I chose an E0 stepping (later revisions usually lower power requirements of the CPU). Unfortunately, my SLBBJ unit was already running pretty hot at stock voltages and clocks, so I’m leaving it alone for the time being.
LGA775 (codenamed Socket T) was introduced by Intel around mid-2004, and used in domestic motherboards. The most popular CPUs running those LGAs are now Core2Duos and Core2Quads.
A year and a half later, in 2006, Intel introduced LGA771, a very similar LGA intended for use in multiprocessor server motherboards, and which can host Intel Xeon processors.
If we checked the socket pin assignments one by one, we would see that there are 76 differing pins in total. But most of them are irrelevant (reserved for future use, etc.) and pose no problem for our conversion mod, so we’re left wondering about the colored pins:
- Red: 8 pins only used in LGA775.
- Green: 4 pins only used in LGA771.
The red and green pins are all power pins (VCC, VSS at the top, and VTT at the bottom). There are hundreds more of them in the socket, so I’m sure our new CPU won’t mind if we remove just these few.
- Blue: 2 pins that have different purposes in each LGA.
These pins (L5 and M5) serve different purposes in LGA775 than in LGA771. And this time they are important pins (one of them is the Execute BIST pin, Built-In Self Test, needed to boot). Fortunately, Intel simply swapped their places in the newer LGA771! So it should be relatively easy to re-wire them.
- Yellow: not a pin, just highlighting the different shapes 🙂
These differing yellow shapes can be a problem, since the CPUs from one LGA will not physically fit in the other LGA without some hardware modifications. We’ll get to this later on.
Uh, in case you’re wondering, I did not personally go over all the pin specifications one by one. But this guy did.
In most cases, the CPU will run as-is. This was the case of my 965P-DS3 motherboard.
Sometimes, you may need to manually patch your BIOS, adding the microcode of your specific Xeon model to the internal “whitelist” (so to speak). Additionally, this usually forces your mobo to acknowledge that your Xeon CPU implements the SSE4 instruction set (which can give an extra speed boost in some applications).
And in a few rare cases, your motherboard will directly refuse to boot the new CPU, regardless of any BIOS patching you may attempt. In that case, you’re out of luck.
Before attempting to transform your LGA775, search the web and check if your chipset will be happy with a Xeon CPU.
In any case, bear in mind that your mobo needs to support the settings that your specific Xeon choice requires: voltage, FSB speed, etc. Otherwise you’ll have to resort to underclocking the CPU (sad), or to overclocking your motherboard/RAM (yay! but not recommended).
First, open your tower, remove the heatsink, then the CPU:
Now we do what the title says: we hack the LGA775 socket. Literally.
Yes, take a sharp knife or a cutter, and prepare to slash some plastic. The exact bits you have to cut off are the ones colored yellow in the LGA775 pinout diagram (scroll back to the beginning of this post). It should end up looking something like this:
The motherboard is ready!
Now we need to hack the Xeon CPU itself. Remember the blue pins that had switched places in LGA771? It’s time to revert what Intel did, and get a 775-compatible pinout layout.
If you’re good enough, you could try to swap them yourself using whatever technique you come up with. But the rest of us mortals will resort to buying a ready-to-use swapper sticker. Search for “775 771 mod” on eBay; play it safe and buy several of them, in case you break one in the process:
So there’s that. Now we simply have to put the Xeon in the LGA, add thermal paste, heatsink, etc:
Finally, plug the PSU, pray to Flying Spaghetti Monster, and boot the system!
Here’s a nice comparison graph of the results. The contenders are:
- A Core2Duo E4300 (LGA775), at speeds ranging from 1.8GHz (stock) up to 3.01GHz (overclocked).
- A Xeon E5440 (LGA771), at stock speed (2.83GHz).
The benchmarks are:
- Assetto Corsa, a multithreaded racing simulator (M30 Gr.A Special Event), FPS measured with my own plugin FramerateWatcher.
- PI calculator SuperPI, 2M variant, running in a single thread.
- Average maximum temperature reached by all cores, over a period of 15 minutes running In-place large FFTs torture test in Prime95.
The winner is the Xeon, as it should be: especially in multithreaded programs, the Xeon obliterates the Core2Duo.
But it’s interesting to note that, even running at the same clock speed of around 2.8GHz, the Xeon outperforms the Core2Duo by more than 20% in single threaded applications.
I checked the FSB and RAM multipliers in both cases, just in case the Xeon had an advantage on that front, but it was actually the E4300 which had higher FSB and RAM clocks!
Goes to show that clock isn’t everything when it comes to performance: better CPU technology is obviously about more than higher clock frequencies and greater core counts.
So that’s it, that’s the story of how I more than doubled the framerates in games and halved compilation times for the cost of 4 movie tickets.
Hope you enjoyed reading this article as much as I definitely did not enjoy proof-reading it! 😉
Everyone knows that Opera is better than Firefox, Vim+Bash is better than any IDE, and 4-space indenting is better than tabulators. Having established those undisputed facts of life, let’s revisit some common fuel for flame wars: the mighty 80 character line limit.
Some people propose 80 chars, others 79, 100, 120, 132, and many values in between.
Some people propose it should be a soft limit (meaning you can freely ignore it in rare cases), while others set up pre-commit hooks to stop any infringing line from reaching the repository.
None of this really matters.
- Better readability when opening two files side by side (for example, a diff)
- Quicker to read (same reason why newspapers use many columns of text, instead of page-wide paragraphs: human brain scans text faster that way).
- It forces you to extract code into separate functions, in order to avoid too many nesting levels.
- It prevents you from choosing overly long symbol names, which hurt readability.
I propose that those advantages are real and desirable, but should not be achieved through arbitrary line length limits. On the contrary, I propose that coders should not waste time formatting their source code: their tools should do it when possible. After all, we use text editors, not word processors! The problem is that, unfortunately, most text editors are too dumb.
Fix the editors, and you fix the need for line length limits.
If you edit your code in vim, you’re in luck, thanks to the Breakindent functionality. Here’s some side-by-side comparisons of 80-char line limit vs. unformatted text with BreakIndent enabled:
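For completeness, here is roughly what that setup looks like in a .vimrc. This is a minimal sketch, not the exact configuration used for the screenshots; `breakindent` needs Vim 7.4.338 or later, and `showbreak` is purely cosmetic:

```vim
" Soft-wrap long lines while keeping them readable:
set wrap          " wrap long lines instead of scrolling horizontally
set linebreak     " break at word boundaries, not mid-word
set breakindent   " continuation lines keep the original line's indent
set showbreak=..  " optional marker shown before each continuation line
```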
All in all: when compared to an editor featuring smart indenting, the 80-char lines artificially limit how you can resize your own windows, with no appreciable gain, and in most cases forcing you to waste many pixels of your carefully chosen 27″ dual-screen coding setup.
“Those who cannot remember the past are condemned to repeat it”
— Jorge Agustín Nicolás Ruiz de Santayana y Borrás
Over the past few years, a number of “graphic terminal” emulator programs have emerged. Some examples:
This is nothing new, in fact it was possible back in the 70’s, and you can try it using XTerm, the default terminal emulator bundled with X installations since forever!
The process is very simple, you simply have to run:
$ xterm -t -tn tek4014
Which will start an xterm emulating a TEK4014 terminal (instead of the default VTxxx plain-text terminal).
Now we’ll download some images we want to display. These 40-year-old terminals don’t support JPEG though (it didn’t exist back then), nor any popular modern image format, so we’ll have to provide images in a format they understand. Plotutils includes a couple of these vector images, so we will run:
# apt-get install plotutils
And finally it’s simply a matter of feeding the Tek4014 terminal with an image, for example:
$ zcat /usr/share/doc/plotutils/tek2plot/dmerc.tek.gz
The terminal will be fed an appropriate escape sequence along with the actual image contents, which it’ll interpret as an image (just like other escape sequences are interpreted as colored or underlined text), and the awesome result will be this:
How cool is that? 🙂
You can even resize the terminal window, and the graphics will be re-rendered with the correct size (remember it’s a vector image, so we can zoom in indefinitely).
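If you want to flip through several images, a tiny helper function does the trick. This is just a sketch meant to be run inside the Tek4014 xterm started above; the plotutils path is simply where Debian happens to ship its sample images:

```shell
# tekshow: feed one or more compressed Tek vector images to the current terminal.
tekshow() {
  for f in "$@"; do
    [ -r "$f" ] || { echo "tekshow: cannot read $f" >&2; return 1; }
    zcat "$f"    # decompress straight to the terminal; xterm interprets the Tek codes
    sleep 1      # leave each image on screen for a moment
  done
}

# Example: tekshow /usr/share/doc/plotutils/tek2plot/*.tek.gz
```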
Here’s a quick snippet that you can add to your .vimrc in order to get:
- MS Visual Studio-like ‘current word‘ highlighting.
- Trailing space highlighting.
The result looks like this:
And the code is:
function Matches()
    highlight curword ctermbg=white ctermfg=black cterm=bold gui=bold guibg=darkgrey
    try
        call matchdelete(w:lastmatch)
        unlet w:lastmatch
    catch
    endtry
    silent! let w:lastmatch = matchadd('curword', printf('\V\<%s\>', escape(expand('<cword>'), '/\')), -1)
    highlight eolspace ctermbg=red guibg=red
    2match eolspace /\s\+$/
endfunction
au CursorMoved * exe 'call Matches()'
Yes, one day I might convert it to a vim plugin, meanwhile just copy-paste to your .vimrc.
So sometime around early 2011, I took two afternoons and played with these technologies. Just now I remembered the project I had in hands, and decided I could give it a name and publish it on the net for your personal amusement.
Consider it pre-alpha, and expect bugs! 🙂
- Infinite landscape, using procedural generation (how else could I squeeze infinity into a few KBs?), and adaptive terrain features based on play style (namely, how fast you like to drive).
- Somewhat realistic physics (based on Box2dJS library).
- Incredibly detailed graphics engine based on WebGL. Nah just kidding, it’s the default HTML5 canvas-based rendering provided by Box2D itself…
- Physically-modelled rolling stones on the driving surface. Framerate suffers too much so they’re disabled by default. To re-enable, dive in the source code and hack away.
- Tested on major PC browsers, and on Dolphin Browser Mini on Android.
- right-arrow -> gas
- left-arrow <- brake
Right now I’m in the middle of a physical home migration so the code is not githubbed yet, but you can access it by clicking the following link:
There’s no purpose as of yet, but you can try to race against the terrain and see how far you last before ending up on your roof or suffering a physics explosion.
License is GPLv3.
So you’ve just come back from vacations (wohoo), having filled 10 gigs of photos and video, only to discover you’re a (let’s be honest here) shitty cameraman without your tripod?
Fret not, for this article will show you the secret to solve your problems!
In an ideal world, your hands are as steady as a rock, and you get Hollywood quality takes. In the real world, however, your clumsy hands could use a hand (hah!).
So here’s your two main options:
Hardware solution (for use while filming)
This is the proper solution: a system that will compensate for the vibration of your shaky hands and the movement of your body while walking – not unlike the springs on your car allow for a pretty comfortable ride through all sorts of bumps.
Ideally, it will compensate for all 6 axes (3D translation + 3D rotation), but in practice you may be limited to fewer than that. Unfortunately (for most), this depends on how deep your pockets are (buying a ready-to-use steadicam, ranging from 100 bucks to several thousand), or on how handy you are with your toolbox (building a homemade equivalent).
The result could (in theory) be similar to this:
(ah, yeah… a segway, minor detail)
Software solution (for use after filming):
If you can’t spare a Segway + a steadicam backpack, there are affordable alternatives. And if you already have many shaky, blurry videos lying on your hard disk, then this is your only option!
We’ll rely on PC software to fix those videos. This, you can do for free at home. There are some payware software packages that may produce slightly better results: but what I’m going to show you is freeware, very quick to use, and good enough quality for most purposes.
The software method may not be that good when compared to an actual steadicam, but hey, it’s better than nothing!
I’m not going to go into much detail, so here are the basics.
- Download VirtualDub, a free and open source video editing program.
(Make sure you can open your videos. E.g. you may need to install the ffdshow-tryout codecs and set them up, or whatever; Google is your friend! 🙂 )
- Once you can open your videos, you have to download the magic piece of the puzzle: Deshaker.
(This free tool – though unfortunately not open source – will do all the important work)
- Now open your video, add the Deshaker video filter, choosing “Pass 1“.
(If you have a rolling shutter camera (most likely), and know its speed (unlikely), you can also correct it by entering the necessary values in there)
- Click OK, and play the video through.
(This will gather information about motion vectors and similar stuff, in order to find out how to correct the shaking, if present)
- Now edit the Deshaker video filter settings again, and choose “Pass 2“. Tweak settings at will, and click OK.
(A progress window will be visible for just a few moments)
- Finally, export the resulting video, and you’re good to go!
For a more detailed guide (including rolling shutter values for some cameras), just read the official Deshaker page, or browse Youtube; there’re some tutorials there too.
The settings basically tune the detection of camera movement, as well as what method will deal with the parts of the image that are left empty after deshaking.
The video below is an example I’ve cooked for you. Each of the 3 processed videos uses a different combination of settings, and was created in no more than 20 minutes each.
I stitched them all together for your viewing pleasure. The improvement is easily appreciated!
That’s it. Happy filming! 8)
If you insist on using hardware solutions (good!), here’s a neat little trick that’ll allow some smooth panning (provided you’re not walking):
Have you ever run out of space, and decided to offload some of the bigger files to a second (or third…) disk? Sometimes even an external drive?
Are you tired of having to look around all disks in order to find that specific file?
What can AwesomeMounter do?
Here’s an example. You’ve got 3 places where you store music: your Linux home (50 gigs), a Windows partition (100 gigs), and an 8-gig USB pendrive:
/home/foo/music/ (has reggae and jazz subdirectories)
/mnt/windows/data/mp3/ (has pop and jazz subdirectories)
/media/pendrive/songs/ (has hiphop and electro subdirectories)
And you want all the music in a single place. Namely:
/music/ (will contain all the aforementioned music)
With AwesomeMounter, you can access the music through the 3 original paths, or through /music/, like this:
/music/ (total size: 50+100+8 = 158 gigs!)
How cool is that?
With AwesomeMounter, you gain the ability to disconnect (or umount) any drive at any time.
Reciprocally, whenever a previously set-up drive is connected or mounted again to your system, the data will automatically reappear where the AwesomeMounter config files tells it to.
This makes it perfect in combination with external drives, such as pendrives, USB hard disks, cellphones, memory cards…
Continuing the example: the USB pendrive at /media/pendrive/ can be removed as you normally would (e.g. right click -> unmount drive), and /music/hiphop and /music/electro will automatically disappear from /music.
(a.k.a. what happens when I write stuff in there?)
Any new file you try to write will automatically be put on the drive with the most free space.
E.g.: If your pendrive is full, data will be stored somewhere else.
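The policy is easy to picture with plain df. The function below is only an illustration of the “most free space wins” idea, not AwesomeMounter’s actual code (which I haven’t quoted here), and `--output` assumes GNU coreutils:

```shell
# most_free: print the mount point with the most available space among those given.
most_free() {
  df --output=avail,target "$@" 2>/dev/null \
    | tail -n +2 \
    | sort -rn \
    | awk 'NR==1 {print $2}'   # first line after sorting = largest available space
}

# e.g. most_free /home/foo/music /media/pendrive/songs
```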
If you need to write stuff to a specific drive, you can bypass this awesome automatic storage balancing by simply using the corresponding original path at /media/pendrive/*, instead of the joined path that AwesomeMounter made available to you at /music/.
How do I use it?
First, copy the script anywhere on your disk, and give it execution permissions.
Then, create a configuration file at ~/.awesomemounter/config. Example file:
##########################
# some configured dirs:
/music /home/foo/music,/media/pendrive/songs
/movies /home/foo/videos,/media/bigHDD
/downloads /home/foo/incoming,/mnt/windows/p2p
# you can nest mounts w/out problems
/video /movies,/downloads/series,/home/foo/docus
##########################
Then simply run awesomemounter from command line (you may be prompted for root access):
Where do I get it?
Don’t forget to create the config file!
DISCLAIMER: Use at your own risk. If your house burns down because of it, don’t blame me. Instead, call the firefighters and only then don’t blame me.
This article tries to give a brief introduction to that ‘Wave‘ thingy everyone used to talk about, explaining the concept, giving a quick history overview, and even showing how to get it running yourself.
Quick history lesson
At the Google I/O conference on May 27, 2009, Google announced this new communication concept: “Wave”. Most technical people attending the event “got it” right away and applauded.
Later that year, Wave was opened for testing through the typical invite system. Google Wave, however, was still half-baked and unusable. Most critics dismissed it as yet another unnecessary social network, a solution waiting for a problem, etc.
Nearing the end of 2010, statistical analysis at Google showed that the public reception wasn’t as good as expected. Google decided to pull the plug and open source parts of it, diverting resources to other projects.
Before 2010 ended, with many parts of it open sourced by Google, the small dev community gathered the pieces and started the new Apache Wave project.
Nowadays, in 2011, Apache Wave is actively developed by the open source community, and an alpha version can be easily run in your computer.
Cool… what was “wave” again?
For the computer literate, here are two easy-to-understand comparison tables, using email as a reference:
| Concept description | E-Mail term | Wave term |
|---|---|---|
| A piece of information | an email | a wave |
| The act of sending a piece of information | to email someone | to wave someone |
| Protocol | SMTP, POP3, IMAP... | Wave Protocol |
| Interaction between different servers | ability to send email from one @domain to another @domain | wave server federation |
| Development of the project concepts and reference software | IETF + independent developers? | Apache Wave |
| Software description | E-Mail software | Wave software |
|---|---|---|
| Proprietary server+webclient package | Yahoo! Mail | Google Wave |
| Open source server+webclient package | Zimbra | Wave In A Box |
| Open source server | Exim | Google FedOne |
| Open source client | Thunderbird | Google Wave-Splash |
| Open source webclient | RoundCube | Micro-Box |
What can Wave be used for?
Wave aims to be a common denominator to many other communication forms. An open standard that anyone can use and implement (even in the form of proprietary servers or clients, like Google Wave). Let’s see an example:
In a common use case, your internet workflow could involve:
- An email client running on your desktop
- Twitter client
- Facebook tab
- A feed reader
- Receiving messages from two mailing lists
- Manually checking some random movie forums weekly for new posts
- Getting notified by email of replies to some blog post comments you wrote
In the wave world case, your internet workflow would involve:
- A Wave client
- Or, alternatively, go the old route: keep using the very same specific clients for each of those services, even if they use the Wave Protocol under the hood (just like Facebook Chat and GTalk run on top of Jabber).
Most importantly, and this cannot be stressed enough:
You are free to choose which clients to use as interface.
And you are also free to choose which servers to use for storing your data waves.
Now try doing that with Facebook, Twitter, Flickr…
Test a WaveInABox demo now
So you want to test the open source Apache Wave software? The wave community runs some test servers and clients on the net. The most common one is located at http://waveinabox.net, and is updated daily.
Disclaimer: WaveInABox server and client are still in very early development stage, so do not rely on them at all, and do not expect everything to work correctly.
Or deploy your own WaveInABox
Maybe you want to test it locally, perhaps play with the code, or even run it privately for personal purposes. In that case, it’s really easy to get it up and running in Linux:
# apt-get install mercurial ant default-jdk eclipse
$ hg clone https://wave-protocol.googlecode.com/hg wave-in-a-box
$ cd wave-in-a-box
$ ant compile-gwt
$ ant dist-server
$ ant -f server-config.xml -Dwave_server_domain=$HOSTNAME -Dsigner_info_store_type=file -Daccount_store_type=file -Ddelta_store_type=file -Dattachment_store_type=disk
(there are instructions for Windows and Mac OS X too)
At this point, the server is running, and the web client can be accessed at http://localhost:9898.
Even though this article is very shallow, I hope it provides a different perspective on the whole subject, and helps people see the actual purpose behind the waves.
By the way, I’m reachable at
firstname.lastname@example.org. Feel free to wave me any time! 😉