oscarbonilla.com

Could this recent Apple Bug be a bad merge?

Adam Langley’s explanation of the recent Apple security bug makes me suspicious of whatever SCM Apple uses for developing iOS. The problem was in this function:


static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
	OSStatus        err;
	...

	if ( (err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
		goto fail;
	if ( (err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
		goto fail;
		goto fail;
	if ( (err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
		goto fail;
	...

fail:
	SSLFreeBuffer(&signedHashes);
	SSLFreeBuffer(&hashCtx);
	return err;
}

Note the two goto fail; lines? I have seen many of those when using diff3 to merge code. What happens is that parallel modifications are made to the same lines of code, and the if guard for the second goto gets dropped in the merge.
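Whatever put the second line there, its effect is mechanical and easy to reproduce. Here is a minimal sketch of the same pattern (the function and names are mine, not Apple’s); clang’s -Wunreachable-code warning, which -Wall does not enable, should flag the dead check:

#include <stdio.h>

/* same shape as the Apple function: the second goto is unconditional,
   so the check on b is dead code and err keeps its last (zero) value */
static int
check(int a, int b)
{
    int err;

    if ((err = a) != 0)
        goto fail;
        goto fail;    /* indented like a guarded statement, but always taken */
    if ((err = b) != 0)    /* never reached */
        goto fail;

fail:
    return err;
}

int
main(void)
{
    printf("%d\n", check(0, 1));    /* prints 0: the b check was skipped */
    return 0;
}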

It would be really interesting to know what SCM Apple uses for developing this code and whether the error was in fact the result of a bad merge.

Disclaimer: This blog post is pure speculation and I do not have any special information or insight. It could well be the case that someone accidentally pasted the second goto fail line, or even typed it.

Written by ob

February 22nd, 2014 at 1:15 pm

Posted in Uncategorized

Why I don’t trust rankings

From a recent article on Slate about how U.S. News ranks universities:

U.S. News changed the scores last year because a new team of editors and statisticians decided that the books had been cooked to ensure that Harvard, Yale, or Princeton (HYP) ended up on top.

Unacceptable! We count on the objectivity of the rankings of U.S. News to decide where to go to college!

So, last year, as U.S. News itself wrote, the magazine “brought [its] methodology into line with standard statistical procedure.” With these new rankings, Caltech shot up and HYP was displaced for the first time ever.

Yay for science!

But the credibility of rankings like these depends on two semiconflicting rules. First, the system must be complicated enough to seem scientific. And second, the results must match, more or less, people’s nonscientific prejudices. Last year’s rankings failed the second test.

No, no! Wait! That’s not how science works!

So, Morse was given back his job as director of data research, and the formula was juiced to put HYP back on top.

Wait! What? You pick the result you want and tweak the numbers until you get it?

The fact that the formulas had to be rearranged to get HYP back on top doesn’t mean that those three aren’t the best schools in the country, whatever that means.

No, but it means the whole thing is bullshit and you’d be better off going with your prejudices since the ranking is tweaked to validate your own preconceptions of quality.

I like how the article ends, though:

If the test of a mathematical formula’s validity is how closely the results it produces accord with pre-existing prejudices, then the formula adds nothing to the validity of the prejudice. It’s just for show. And if you fiddle constantly with the formula to produce the result you want, it’s not even good for that.

And that is why I tend to avoid looking at rankings or any kind of measure I don’t fully understand.

Written by ob

September 12th, 2013 at 11:09 am

Posted in Math

Bayes’ Theorem Using Trimmed Trees

A reader sent me the link to the following video:

I think it does a great job of walking you through multiple applications of Bayes’ theorem. I find it easier to use the equations, but we all learn in different ways so I thought it was worth a link.
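For reference, the equation I have in mind is just Bayes’ theorem itself,

P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}

where P(A) is your prior, P(B \mid A) is the likelihood of the evidence given the hypothesis, and P(B) is the overall probability of the evidence.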

Written by ob

January 28th, 2013 at 11:55 pm

Posted in Math

Wickr: Private Social Networking

Back in 2007 I wrote about how I distrusted Facebook. Now there is a new startup building a social sharing service with strong privacy guarantees. Their name is Wickr and their app is available now in the App Store.

Two things would make me feel more comfortable about them, though. First, what’s their business model? We know Facebook’s business model, and even if you think it’s evil, it’s a known evil. Their website claims they will offer add-on services later on. But if enough people start using it, the load on the servers will still cost money.

Second, what is their “patent-pending Digital Security Bubble” algorithm? They claim it uses AES-256 and RSA-4096. But how does it work exactly? I’m a bit surprised they went with RSA-4096: the only secure way to generate the keys is on the iPhone itself, but generating a good RSA-4096 key is sloooow… although it only needs to be done once.
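If you want a rough feel for the cost, you can time key generation with the openssl command line (on a laptop or desktop, so treat it as a loose lower bound for phone hardware):

    time openssl genrsa 4096 > /dev/null

Generation time varies from run to run – finding two random 2048-bit primes is a matter of luck – but it takes seconds even on desktop hardware, which is presumably why you would generate the key once and cache it.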

The one bit of criticism I have is this: “RED UI??? Really?”.

I also know about Glassboard, which everybody raves about, but my main concern is not protecting myself from my friends, it’s protecting myself from the companies that run the services. Glassboard has access to your information, and even though their privacy policy isn’t bad, they could be acquired by Facebook (see Instagram).

At any rate, I really hope they succeed and get a good UI guy.

Written by ob

June 27th, 2012 at 10:41 am

Posted in Cryptography

Fun with Caches

A recent thread about latency numbers on Hacker News reminded me that I had meant to write a few blog posts about caches that I never did get around to. So I took a trip down to the cellar (what I call my “Drafts” folder), sniffed this post and decided it’s ready.

I first saw the technique used for generating these latency numbers in exercise 5.2 on page 476 of Computer Architecture: A Quantitative Approach[1] by Hennessy and Patterson.

The basic idea is that you write a program to walk a contiguous region of memory using different strides and time how long accessing the memory takes. The idea for this exercise comes from the Ph.D. dissertation of Rafael Héctor Saavedra-Barrera, where he describes the following approach:

Assume that a machine has a cache capable of holding D 4-byte words, a line size of b words, and an associativity a. The number of sets in the cache is given by D/ab. We also assume that the replacement algorithm is LRU, and that the lowest available address bits are used to select the cache set.

Each of our experiments consists of computing many times a simple floating-point function on a subset of elements taken from a one-dimensional array of N 4-byte elements. This subset is given by the following sequence: 1, s + 1, 2s + 1, …, N – s + 1. Thus, each experiment is characterized by a particular value of N and s. The stride s allows us to change the rate at which misses are generated, by controlling the number of consecutive accesses to the same cache line, cache set, etc. The magnitude of s varies from 1 to N/2 in powers of two.

He goes on to note

Depending on the magnitudes of N and s in a particular experiment, with respect to the size of the cache (D), the line size (b), and the associativity (a), there are four possible regimes of operations; each of these is characterized by the rate at which misses occur in the cache. A summary of the characteristics of the four regimes is given in table 5.1.

And table 5.1 helpfully summarizes the regimes.

Size of Array    Stride           Frequency of Misses            Time per Iteration
1 ≤ N ≤ D        1 ≤ s ≤ N/2      no misses                      T
D < N            1 ≤ s < b        one miss every b/s elements    T + Ms/b
D < N            b ≤ s < N/a      one miss every element         T + M
D < N            N/a ≤ s ≤ N/2    no misses                      T

I thought it would be fun to try it, so I wrote a program to do that. If you want to download it, go to github, but the guts of it is this function:


void
sequential_access(u32 cache_max)
{
    register u32 i, j, stride;
    u32 steps, csize, limit;
    double sec0, sec;

    /* array sizes from CACHE_MIN up to cache_max, doubling each time */
    for (csize = CACHE_MIN; csize <= cache_max; csize *= 2) {
        /* every power-of-two stride from 1 up to half the array size */
        for (stride = 1; stride <= csize / 2; stride *= 2) {
            sec = 0.0;
            limit = csize - stride + 1;
            steps = 0;
            do {
                sec0 = timestamp();
                /* SAMPLE*stride passes keep the total number of accesses
                   roughly constant regardless of the stride */
                for (i = SAMPLE * stride; i > 0; i--) {
                    for (j = 1; j <= limit; j += stride) {
                        buffer[j] = buffer[j] + 1;
                    }
                }
                steps++;
                sec += timestamp() - sec0;
            } while (sec < 1.0);    /* accumulate at least one second */

            /* stride and array size in bytes, then nanoseconds per access */
            printf("%lu\t%lu\t%.1lf\n",
                   stride * sizeof(u32),
                   csize * sizeof(u32),
                   sec * 1e9 / (steps * SAMPLE * stride * ((limit - 1) / stride + 1)));
            fflush(stdout);
        }
        printf("\n\n");
        fflush(stdout);
    }
}

As you can see, it’s a very simple program that times a loop accessing a cache in different strides. Just like the exercise in Hennessy and Patterson and just like the description in Saavedra-Barrera’s dissertation.

I ran this program on my machine after rebooting in single-user mode like Hennessy and Patterson suggest so that virtual addresses track physical addresses a little closer, and with a little help from gnuplot, I got this:

Latency Mac OS X - Sequential

Sequential Access

You can tell a lot by just glancing at that graph: the cache-line size, the sizes of the L1 and L2 caches, the page size, the associativity of the cache, and the TLB size. Here is the data used to create that graph and here is the script.

One interesting tidbit is that modern processors prefetch aggressively when they detect sequential access patterns, so you need to randomize the accesses to memory to get accurate latency numbers; one way to do that is sketched below.
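Here is a minimal sketch of the usual trick (my illustration, not code from the repository above; it assumes <stdlib.h> and the u32 typedef used earlier): build a random cycle over the buffer and chase it, so every load depends on the one before it and the prefetcher can’t run ahead.

/* build a random single cycle over n slots: next[i] is the slot to visit
   after slot i, and following it touches every slot exactly once */
static void
build_cycle(u32 *next, u32 n)
{
    u32 i, j, tmp;
    u32 *perm = malloc(n * sizeof(u32));

    /* Fisher-Yates shuffle of the indices 0..n-1 */
    for (i = 0; i < n; i++)
        perm[i] = i;
    for (i = n - 1; i > 0; i--) {
        j = rand() % (i + 1);
        tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
    }
    /* link the shuffled order into one cycle covering every slot */
    for (i = 0; i < n; i++)
        next[perm[i]] = perm[(i + 1) % n];
    free(perm);
}

static u32
chase(u32 *next, u32 n)
{
    u32 i, k = 0;

    /* each load depends on the previous one: no prefetching possible */
    for (i = 0; i < n; i++)
        k = next[k];
    return k;    /* returning k keeps the loop from being optimized away */
}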

Latency Mac OS X - Random

  1. the second edition

Written by ob

June 5th, 2012 at 5:32 pm

Mersenne Primes

In 1644, Marin Mersenne, of Mersenne primes fame, made the bold claim that 2^{67}-1 was a prime number. That claim went unchallenged for more than 250 years – no computers back then – until…

…in 1903, Frank Nelson Cole of Columbia University delivered a talk with the unassuming title “On the Factorization of Large Numbers” at a meeting of the American Mathematical Society. “Cole – who was always a man of very few words – walked to the board and, saying nothing, proceeded to chalk up the arithmetic for raising 2 to the sixty-seventh power,” recalled Eric Temple Bell, who was in the audience. “Then he carefully subtracted 1 [getting the 21-digit monstrosity 147,573,952,589,676,412,927]. Without a word he moved over to a clear space on the board and multiplied out, by longhand,

193,707,721 \times 761,838,257,287

“The two calculations agreed. Mersenne’s conjecture – if such it was – vanished into the limbo of mathematical mythology. For the first…time on record, an audience of the American Mathematical Society vigorously applauded the author of a paper delivered before it. Cole took his seat without having uttered a word. Nobody asked him a question.”[1]

Now I know where Professor Felton found his inspiration.

You’ve probably heard of Felton (National Academy of Science, IEEE Past President, NRA sustaining member). My advisor told me later that Felton’s academic peak had come at that now-infamous 1982 Symposium on Data Encryption, when he presented the plaintext of the encrypted challenge message that Rob Merkin had published earlier that year using his “phonebooth packing” trap-door algorithm. According to my advisor, Felton wordlessly walked up to the chalkboard, wrote down the plaintext, cranked out the multiplies and modulus operations by hand, and wrote down the result, which was obviously identical to the encrypted text Merkin had published in CACM. Then, still without saying a word, he tossed the chalk over his shoulder, spun around, drew and put a 158-grain semi-wadcutter right between Merkin’s eyes. As the echoes from the shot reverberated through the room, he stood there, smoke drifting from the muzzle of his .357 Magnum, and uttered the first words of the entire presentation: “Any questions?” There was a moment of stunned silence, then the entire conference hall erupted in wild applause. God, I wish I’d been there.[2]

 

  1. The Man Who Loved Only Numbers by Paul Hoffman
  2. Auto-weapons by Olin Shivers.

Written by ob

June 12th, 2011 at 6:18 pm

Posted in Math

Radiation

In light of the events at the Fukushima Daiichi nuclear plant, I had to go revisit my notes on nuclear physics to make sense of the news. I thought I’d share my notes here in case they prove useful to somebody else. Dear physicist friends, please critique away.

Just what exactly is radiation?

Let’s start with the classic model of the atom: we have a nucleus composed of protons and neutrons and a cloud of electrons surrounding the nucleus. The charge of the atom is balanced because the number of protons is the same as the number of electrons. How many protons the nucleus has determines the kind of element you have.

 

Periodic Table of Elements

See those numbers at the top of each box? That’s the number of protons in the nucleus. Note however that since neutrons have no charge, their number can vary and the atom’s electric charge will still be balanced. Chemically, it will behave the same. But atomically it is different. These atoms that have a different number of neutrons in the nucleus are called isotopes.

You might recall that, electrically, like charges repel and unlike charges attract. Since all protons are positively charged, why doesn’t the nucleus disintegrate? The answer is a force much more powerful than the electromagnetic force that makes protons repel each other, but one that acts only over very short distances: the strong nuclear force.

All those neutrons inside the nucleus act like a kind of cement, binding the protons together. The protons both attract each other (strong nuclear force) and repel each other (electromagnetic force), and both forces diminish rapidly the farther apart the protons are. However, the strong nuclear force diminishes much, much faster than the electromagnetic force. Move the protons too far apart and the strong nuclear force starts losing out to the electromagnetic force. That is exactly what happens when you have too many neutrons in the nucleus: they push the protons farther apart, to the point where the strong nuclear force can no longer hold the nucleus together against the electromagnetic repulsion. This is a long way of saying that most nuclei with too many neutrons turn out not to be very stable.

So what happens with an unstable isotope? It emits particles from the nucleus to stabilize itself. These particles can be one of two kinds:

  1. Two protons and two neutrons (a Helium nucleus) shoot out from the nucleus of the isotope. This is called an Alpha particle, and it has a positive charge (two protons and no electrons).
  2. A neutron emits an electron (the neutron turns into a proton) or a proton emits an anti-electron (positron) and turns into a neutron. These electrons or positrons emitted from the nucleus are what are called beta particles.

Once an unstable nucleus has emitted alpha or beta particles, it remains in an excited state (at a higher than normal energy level), but this doesn’t last very long. What happens is that this extra energy is released in the form of gamma rays, which are very high energy photons.

And what happens to the nucleus? Well, since each alpha particle emitted makes it lose two protons, it drops two places in the periodic table. Uranium-238 becomes Thorium-234 by emitting an alpha particle and some gamma rays. And since each beta emission either turns a neutron into a proton or vice versa, the element can “decay up” to the next element in the periodic table or “decay down” to the previous one. Thorium-234 becomes Protactinium-234 by emitting a negatively charged beta particle. Here is the full decay chart of Uranium-238.

Decay Chart for Uranium 238

So what is radioactivity? It is the emission of alpha and beta particles plus some gamma rays from unstable isotopes that are decaying into more stable isotopes.

Now, we don’t know exactly when a particular nucleus will decay – it has a small, constant probability of decaying per unit of time (Hi Schrödinger!) – but we do know that if we have a certain amount of material, eventually enough of the nuclei will decay that we will be left with about half of the original material and half of whatever it decays into. How long this process takes depends on the specific isotope, and it is what is called the half-life of the isotope.
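For the equation-minded (this is just the standard exponential decay law, nothing specific to these notes): starting with N_0 nuclei, the number still intact after time t is

N(t) = N_0 \, e^{-\lambda t}

where \lambda is the isotope’s decay constant. Setting N(t) = N_0/2 and solving for t gives the half-life, t_{1/2} = \frac{\ln 2}{\lambda}.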

Elements have ‘isotopes’: atoms with the same number of protons in the nucleus but a different number of neutrons. These isotopes are often unstable, decaying into other, more stable isotopes by emitting alpha particles, beta particles, and gamma rays. For any given amount of material, how long it takes for half of the nuclei to decay is called the half-life of the isotope.

Why is Radiation Dangerous?

You might have heard the term ‘ionizing radiation’. What is ionizing radiation? Well, remember those alpha particles, beta particles, and gamma rays? They have a lot of energy. In fact, they have so much energy that if they collide with another atom, they can easily knock out an electron from it. This leaves the atom positively charged or ‘ionized’. The problem with ionized atoms is that since they are no longer electrically neutral, they tend to combine with other atoms forming ionic bonds.

So what does ionizing radiation do to the body? Well, DNA’s building blocks (its nucleotides) are mostly composed of hydrogen, oxygen, nitrogen, carbon, and phosphorus. If one of these loose particles goes knocking electrons off atoms, the electrical properties of those atoms change, altering the chemical properties of the molecules they belong to, which can damage the DNA.

DNA Structure

Damaged DNA can result in one of three outcomes:

  1. The injured cell can repair itself, resulting in no residual damage.
  2. The injured cell dies. Not much different from the millions of cells that die and get replenished every day.
  3. The injured cell incorrectly repairs itself, resulting in a mutation.

It’s this third case that we are generally worried about.

Now, remember that in order to damage the cell, a given particle needs to actually hit it. Let’s take a look at the particles and have some idea of their energy levels.

Alpha particles

Remember that these are helium nuclei: two protons and two neutrons. What this means is that they are big (compared to beta particles, which are electrons, and gamma rays, which are photons). The energy an alpha particle carries depends on the kind of isotope that emitted it – the heavier elements emit higher-energy alpha particles – but in general they have between 3 and 7 MeV (mega-electronvolts). Even though this is a lot of energy for a single particle, they are also pretty massive (a proton has roughly 2,000 times the mass of an electron), so they don’t go that fast. They are also positively charged, which means they get deflected by magnetic fields.

For all of these reasons, alpha particles are not particularly dangerous unless you ingest the isotope producing them. If you just happen to be around them, they won’t even penetrate your skin, and they can be blocked using a sheet of paper (so much for an alpha-particle death ray).

However, if you do ingest them, they are pretty nasty, as alpha radiation is one of the most destructive forms of ionizing radiation. The damage it can cause is estimated to be 10 to 1,000 times greater than that of beta or gamma radiation. One example of this is polonium-210, which is typically found in cigarettes. You smoke it, it generates alpha radiation inside your lungs, you get cancer. That’s also how they whacked Alexander Litvinenko: they poisoned him with polonium-210.

Beta Particles

Beta particles are much less massive than alpha particles, so they are about 100 times more penetrating. Beta radiation can be blocked using a piece of aluminum (get out the tinfoil hats!). They are typically used in medicine precisely because they penetrate the human body so well. For instance, PET scans use a radioactive tracer isotope as a source of positrons. So yeah, you get irradiated with them for medical reasons.

Gamma Rays

Gamma rays are a bit nastier than both alpha and beta particles: since they are electrically neutral, they are much more penetrating (short of a direct collision, what stops a particle is electromagnetic fields, and gamma rays don’t feel them). They tend to cause more cell damage when they have about the same energy as an alpha particle (between 3 and 5 MeV).

Measuring Radiation

So how do we measure radiation? This is where the story gets complicated. Radiation is measured using a mishmash of different units, depending on what exactly you are measuring.

At the most basic level, you measure radioactivity: the amount of ionizing radiation released by a material. This measure doesn’t make a distinction between alpha particles, beta particles, or gamma rays; it just measures how many nuclei are decaying per unit of time (usually seconds). The old unit of measurement was the Curie (Ci) and the new one is the Becquerel (Bq). One Curie is equal to 37 billion (3.7 \times 10^{10}) disintegrations per second. One Becquerel is one disintegration per second. It follows that one Curie is equal to 37 billion Becquerels.

The other measurement is exposure: the amount of radiation traveling through the air. The units of exposure are the Roentgen (R) and Coulomb/Kilogram (C/kg). One Roentgen is the amount of radiation necessary to produce a charge of 0.000258 C/kg under standard conditions.

The absorbed dose is the amount of radiation absorbed by an object, i.e. the amount of energy deposited in the object the radiation passed through. The units are the radiation absorbed dose (rad) and the gray (Gy). An absorbed dose of 1 rad means that 1 gram of material absorbed 100 ergs of energy. One gray is equivalent to 100 rad.
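As a sanity check, those two units line up. A gray is defined as one joule per kilogram, and

1 \, \mathrm{Gy} = 1 \, \mathrm{J/kg} = \frac{10^7 \, \mathrm{erg}}{10^3 \, \mathrm{g}} = 10^4 \, \mathrm{erg/g} = 100 \, \mathrm{rad}

since one rad is 100 ergs per gram.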

Finally, the effective dose combines the absorbed dose with the medical effects of absorbing that type of radiation. This is because living cells respond differently to alpha particles than to beta particles or gamma radiation; alpha particles tend to be more damaging than beta particles or gamma rays. The units for the effective dose are the Roentgen equivalent man (rem) and the Sievert (Sv). The rem is just the rad adjusted by a weighting factor for the amount of damage the specific type of radiation does to the body. For instance, the weighting factor for beta particles and gamma rays is one; for alpha particles it is 20. The Sievert is equivalent to 100 rem.
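In equation form (this is the standard weighting scheme; strictly speaking it gives the “equivalent dose”, and the effective dose additionally weights by tissue type): the dose H in Sieverts is the absorbed dose D_R from each radiation type R scaled by its weighting factor w_R,

H = \sum_R w_R \, D_R

so 1 Gy of beta or gamma radiation (w_R = 1) is 1 Sv, while 1 Gy of alpha particles (w_R = 20) is 20 Sv.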

For health and risk purposes, the only measure we care about is the Sievert (or rem). Getting sprayed with alpha particles is much more damaging than getting sprayed with beta particles (in fact, 20 times more damaging).

On average we receive a dose of 0.62 rem (6.2 mSv) every year. About half of it comes from natural background radiation, and the other half from man-made sources (X-rays, for instance).

Randall Munroe has helpfully created a chart of typical amounts of the effective dose of radiation measured in Sieverts. I suggest you take a look.

Don’t pay attention to Becquerels, rems, or any other unit. Sieverts are what you care about. Look at Randall Munroe’s chart to get a sense of how much is too much.

Radioactive Fallout

Inside a nuclear reactor, there are all sorts of isotopes being created and decaying. The ones you will mostly read about are Iodine-131, Caesium-137, and Strontium-90.

Iodine-131

This isotope has been in the news a lot lately. Its half-life is just about 8 days, and it typically decays by emitting beta particles and gamma rays. It tends to enter the food chain, and if you ingest it, it accumulates in the thyroid gland. That’s why they give you iodine tablets for radiation sickness: not because they do anything to stop the radiation, but because they saturate your body with non-radioactive iodine so your thyroid gland doesn’t accumulate the radioactive kind.

Caesium-137

Caesium-137 has a half-life of about 30 years, so it is more problematic than Iodine-131, which decays rapidly. However, once it has entered the human body, it gets distributed fairly uniformly and has a biological half-life of about 70 days. Like Iodine-131, it decays by emitting beta and gamma radiation.

Strontium-90

Strontium-90 has a half-life of about 28 years, and it is normally referred to as a “bone seeker”. This is because it is biochemically similar to calcium, so after entering the body, about 30% of it accumulates in the bones and bone marrow. The rest gets excreted.

 

Written by ob

April 12th, 2011 at 5:11 pm

Posted in Physics

Distrust

In light of recent news:

The hacker, whose March 15 attack was traced to an IP address in Iran, compromised a partner account at the respected certificate authority Comodo Group, which he used to request eight SSL certificates for six domains: mail.google.com, www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org and login.live.com.

The certificates would have allowed the attacker to craft fake pages that would have been accepted by browsers as the legitimate websites. The certificates would have been most useful as part of an attack that redirected traffic intended for Skype, Google and Yahoo to a machine under the attacker’s control. Such an attack can range from small-scale Wi-Fi spoofing at a coffee shop all the way to global hijacking of internet routes.

At a minimum, the attacker would then be able to steal login credentials from anyone who entered a username and password into the fake page, or perform a “man in the middle” attack to eavesdrop on the user’s session.

And because this is not the first time COMODO has screwed up, I’ve decided to stop trusting their root certificate in my browser (Safari). Here’s how you do that.

  1. Open Keychain Access (in /Applications/Utilities).
  2. Click on “System Roots” in the left pane.
  3. Search for “COMODO”.
  4. Right-click on the certificate and select “Get Info”.
  5. In the “Trust” section, select “Never Trust”.
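If you prefer the command line, something along these lines should also work (a rough sketch using macOS’s security(1) tool; you may need the root’s exact name, and I haven’t verified the flags on every OS version):

    security find-certificate -c "COMODO" -p \
        /System/Library/Keychains/SystemRootCertificates.keychain > comodo.pem
    sudo security remove-trusted-cert -d comodo.pem

The first command exports the matching root as PEM; the second removes it from the admin trust settings.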

I’ve just done this so we’ll see if it has any effect on my general surfing experience.

Written by ob

March 31st, 2011 at 10:54 am

Posted in Uncategorized

Harry Potter and the Methods of Rationality

Have you ever wondered whether some books would be better if the author had rewritten them after they were done? I have wondered that about the Harry Potter books.

I read the original Harry Potter books back when they came out, and even though I found them quite entertaining, they have many flaws. They got progressively worse as their universe became more complicated and they tried to have a deeper story. But it definitely shows in the books that J.K. Rowling did not have a “grand plan” and was basically just making stuff up as she went. In the words of nexes300:

Rowling showed absolutely no planning of the universe past the third book and added things as she liked. Also, Hogwarts is a failure of a school, and Harry and his friends are terrible magic users.

I also disliked the unidimensional characters. In Harry Potter’s universe you are either good or evil; there are no in-betweens. Still, the books are entertaining.

Last weekend I had a cold, and I serendipitously got a tip from the brown dragon about a new fan fiction Harry Potter book.

…if you are the type of person who read Harry Potter and thought:

  • Set up experiments to test magic
  • Witches turn into CATS? …But…but…but… What about conservation of mass?
  • Gringotts deals in gold? There are arbitrage opportunities between muggles and the wizarding world!
  • I have a Time-Turner? I can try to prove that P == NP

And when I saw who the author[1] was, I had to read it.

Harry Potter and the Methods of Rationality is the best Harry Potter book I’ve read. If you enjoyed the original series and you like science, I cannot recommend it enough[2].

  1. Yeah, that Yudkowsky – perhaps you remember him from here.
  2. With just one caveat: the work isn’t finished. Don’t expect the story to end.

Written by ob

December 23rd, 2010 at 1:42 am

Posted in Books,Math,Reviews

Doing it wrong

Couldn’t resist posting this one from xkcd:

'Dude, wait -- I'm not American! So my risk is basically zero!'

Written by ob

September 20th, 2010 at 10:14 am

Posted in Humor,Math
