From a Microsoft tutorial:

If the parameter value is hard-coded, comes from a Web control on the page, or is in any other source that is readable by a data source Parameter object, for example, that value can be bound to the input parameter without writing a line of code.

What the hell is wrong with writing code? For pity’s sake, we’re programmers. What’s wrong with actually typing something into a text file? Since when did we as programmers get elevated above the dirty business of writing code?

As someone smarter than me put it: “Whatever happened to programming?”

And if you really think that not writing code is somehow saving time, read the incredible sequence of wizard twiddling needed to perform this relatively simple task. How the *fuck* is that better than writing code?

http://kranzky.rockethands.com/2010/02/13/interzone-the-downward-spiral/

WTF of the year candidate.

Going through some old domains I haven’t found a use for yet, and I’ve decided to sell the following:

http://wickid.com

http://feelingshiny.com

http://ninjatek.com

Anyone interested? I’ll link to the auction site soon.

Here is the link – please download and rate if you think it will be helpful for your meditation needs!

See also:

http://monkeystylegames.com/taotimer

http://taoism.about.com/od/meditation/ht/standing.htm

Apple have some details on their developer site about using the “Grand Central Dispatch” system. It is interesting to note they mention both Mac OS X and iPhone OS as using the less-threaded, more-asynchronous design approach. Multi-core iPhones are inevitable I guess.

Edit: Further juicy details – Apple extended C in 10.6 to provide language support for a “Block” object – equivalent to a “lambda” in other languages. I found the extension specification on the LLVM site here.
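
Here’s a minimal sketch of what the syntax looks like (my own toy example using the plain C libdispatch API, not code from the Apple docs):

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
  /* A block literal behaves like a lambda: it captures 'base' by value. */
  int base = 10;
  int (^addBase)(int) = ^(int x){ return x + base; };
  printf( "%d\n", addBase( 5 ) ); /* prints 15 */

  /* GCD schedules blocks onto queues instead of making you manage threads. */
  dispatch_async( dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0 ), ^{
    printf( "running on a GCD worker thread\n" );
    exit( 0 );
  });

  dispatch_main(); /* park the main thread and let GCD do the work */
}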

I recently submitted some inline assembly versions of Matrix * Matrix, and Matrix * Vector functions for ARM NEON to the Oolong Engine project.

Here is the full source; note that the functions assume column-major matrices.
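
For a rough idea of the data layout, here is what the Matrix * Vector case looks like written with intrinsics instead (an illustrative sketch with made-up names, not the hand-written assembly that went into Oolong):

#include <arm_neon.h>

/* Column-major 4x4 matrix times a 4-vector:
   result = col0*v.x + col1*v.y + col2*v.z + col3*v.w */
void mat4_mul_vec4( const float* m /* 16 floats, column-major */,
                    const float* v /* 4 floats */,
                    float* out     /* 4 floats */ )
{
  float32x4_t col0 = vld1q_f32( m + 0 );
  float32x4_t col1 = vld1q_f32( m + 4 );
  float32x4_t col2 = vld1q_f32( m + 8 );
  float32x4_t col3 = vld1q_f32( m + 12 );
  float32x4_t vec  = vld1q_f32( v );

  float32x4_t r = vmulq_n_f32( col0, vgetq_lane_f32( vec, 0 ) );
  r = vmlaq_n_f32( r, col1, vgetq_lane_f32( vec, 1 ) );
  r = vmlaq_n_f32( r, col2, vgetq_lane_f32( vec, 2 ) );
  r = vmlaq_n_f32( r, col3, vgetq_lane_f32( vec, 3 ) );

  vst1q_f32( out, r );
}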

In response to a question on this blog, here are a bunch of ARM/NEON/SIMD resources that I have accrued in my bookmarks over the last few months.

There are essentially three approaches in GCC, which trade off power/flexibility for ease of use:

  1. Assembly (standalone, or inline),
  2. Compiler intrinsics, and
  3. “Automatic” compiler vectorization (a quick sketch of this one follows the list).
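
As a taste of approach 3 (just a sketch; the function and flags are my example, assuming a NEON-capable GCC), a plain loop like the one below can be turned into NEON vector instructions automatically when built with something like -O3 -mfpu=neon -mfloat-abi=softfp (GCC enables -ftree-vectorize at -O3):

/* No intrinsics, no asm: with the right flags the compiler vectorizes this itself. */
void scale_array( float* dst, const float* src, float s, int n )
{
  for( int i = 0; i < n; ++i )
    dst[i] = src[i] * s;
}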

The first link explains the differences:

Happy reading, and let me know if you find anything else useful.

The ARMv7 CPU (Cortex-A8) used in the iPhone 3GS is a very nice CPU. One of the things it supports is real SIMD intrinsics. Although Apple don’t document this, the fine folks who made GCC do.

Here’s how to demonstrate this in Xcode.

0. Create a project.

1. Set Target->Architecture to “Optimized (armv6 armv7)”.

This builds a fat binary with two executables – one for the older arm architecture and one for the Cortex/NEON architecture.

2. Set Other C Flags to “-mfloat-abi=softfp -mfpu=neon”.

As specified in the “arm_neon.h” header. I’m guessing that these are ignored for the armv6 binary.

3. Include preprocessor guards in the source to make sure the intrinsics are only compiled in for armv7. See the following snippet:

#ifdef _ARM_ARCH_7   /* only compile the NEON path into the armv7 slice */
#include <arm_neon.h>

/* Multiply each of the four lanes of v by the scalar f. */
float32x4_t scale( float32x4_t v, float f )
{
  return vmulq_n_f32( v, f );
}
#endif

Note: At the moment I can only make this code compile as C; there seems to be an internal GCC issue when compiling it as C++.
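
One workaround (just a sketch, and the file and function names here are invented) is to keep the intrinsics in a plain C file and expose them to the C++ code through a C-linkage wrapper:

/* neon_scale.h -- a C-linkage interface the C++ code can call */
#ifdef __cplusplus
extern "C" {
#endif
void scale4( const float in[4], float out[4], float f );
#ifdef __cplusplus
}
#endif

/* neon_scale.c -- compiled as C, so the intrinsics never meet the C++ front end */
#ifdef _ARM_ARCH_7
#include <arm_neon.h>
void scale4( const float in[4], float out[4], float f )
{
  vst1q_f32( out, vmulq_n_f32( vld1q_f32( in ), f ) );
}
#else
void scale4( const float in[4], float out[4], float f )
{
  for( int i = 0; i < 4; ++i ) out[i] = in[i] * f; /* armv6 fallback */
}
#endif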

4. Choose “Build->Show Assembly Code”. You should see a “vmul.f32 x, y, z” assembly instruction buried amongst the stack maintenance code.

Good news for mobile gamers. NEON, together with proper shaders, should help the next wave of iPhone games leapfrog the quality of other handhelds.

A friend, in despair, recently asked:

“How do they manage to make software go slower at a rate that exceeds Moore’s Law?”.

I’ll take it as given that we have all experienced those moments while waiting for files to “move to trash” on a 2.5GHz machine.

Anyway, I glibly answered:

“Programmers are getting stupider at a rate that exceeds Moore’s Law.”

And left it at that.

But it kept bugging me. I don’t know the reason why software is getting slower. But programmers are getting stupider, and here’s why:

1. The first point is that humans aren’t getting smarter; there’s no real natural selection for it. Perhaps external factors like education are decreasing stupidity slightly, but probably not by much more than population growth in poor, uneducated countries adds to it. So for argument’s sake I’m going to say that the percentage of smart people stays constant, or at best increases very slightly.

2. In contrast, the number of programmers that exist has increased significantly in the last twenty years.

3. The last, and most important, point is that as a group grows, it becomes more similar to the population as a whole.

Putting these ideas together we arrive at the awful truth. Fifty years ago, there were hardly any programmers. The programmers that did exist were brilliant – they had to be. What about twenty years ago during the PC revolution? How about ten years ago when computers became commodity items?

The sad fact is that, as programming becomes more and more popular, the stupidity of programmers must approach the stupidity of the average population.

And that is possibly why we have crap software. And “design patterns”, and “Object Oriented”, and other fads that are designed to stop the bulk of programmers in an organization from doing too much damage, rather than letting individuals come up with creative solutions or clean designs.

Yup. This is the first public release of the project; just a quick teaser on YouTube: