
Commit

Import Ghost content
cqcallaw committed Oct 12, 2020
1 parent be21190 commit 675c080
Showing 42 changed files with 1,463 additions and 0 deletions.
15 changes: 15 additions & 0 deletions content/blog/about/_index.md
@@ -0,0 +1,15 @@
+++
author = "Caleb Callaway"
date = 2016-01-29T12:35:25Z
description = ""
draft = false
slug = "about"
title = "About"

+++


A place on the Internet to brain-dump about various topics, mostly technical; social media doesn't give me as much control over the data as I'd like.

Much can be inferred about the nature of my interests from the information on [the homepage](/).

20 changes: 20 additions & 0 deletions content/blog/bdb0060-panic-fatal-region-error-detected-run-recovery.md
@@ -0,0 +1,20 @@
+++
author = "Caleb Callaway"
date = 2018-03-03T12:53:35Z
description = ""
draft = false
slug = "bdb0060-panic-fatal-region-error-detected-run-recovery"
title = "BDB0060 PANIC: fatal region error detected; run recovery"

+++


Recently I noticed the GitHub webhook that notified the Brainvitamins website of changes to my [resume](https://github.com/cqcallaw/resume) was bringing the site to its knees. Each time the webhook was triggered, the Apache error log flooded with the following error:

> BDB0060 PANIC: fatal region error detected; run recovery
> BDB0060 PANIC: fatal region error detected; run recovery
> BDB0060 PANIC: fatal region error detected; run recovery
> [repeat ad infinitum until the server runs out of disk space]

It seems the recent Ghost upgrade corrupted the Apache installation somehow, since it was necessary to back up my Apache configuration files and purge the Apache installation (something akin to `sudo apt remove --purge apache2 && sudo apt --purge autoremove`) to resolve the issue. I found very little information about this error online; hopefully this post will help some other lost soul encountering a similar issue.

23 changes: 23 additions & 0 deletions content/blog/borderlands-3-in-proton.md
@@ -0,0 +1,23 @@
+++
author = "Caleb Callaway"
date = 2020-03-28T19:05:09Z
description = ""
draft = false
slug = "borderlands-3-in-proton"
title = "Borderlands 3 in Proton"

+++


Borderlands 3 recently became available through Steam, and I'm happy to report it plays quite well in Proton once the commonly available Media Foundation workarounds are installed. My Nvidia GTX 1080 yields a respectable 50 FPS at 2560x1440 with "Badass" quality settings.

Out of the box, I noticed a lot of choppiness in the framerate that disappeared after the first few minutes of gameplay, even with the lowest quality settings. This is consistent with shader cache warmup issues, so I configured a dedicated, persistent shader cache with [Steam launch options](https://support.steampowered.com/kb_article.php?ref=1040-JWMT-2947):

```
__GL_SHADER_DISK_CACHE='1' __GL_SHADER_DISK_CACHE_PATH='/home/caleb/tmp/nvidia/shaders/cache' __GL_SHADER_DISK_CACHE_SKIP_CLEANUP='1' %command%
```

My GPU doesn't share a power budget with the CPU, so I also configured the [performance CPU frequency governor](https://support.feralinteractive.com/en/mac-linux-games/shadowofthetombraider/faqs/cpu_governor/).
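
The linked Feral guide covers the details; as a rough sketch, the governor can also be set directly through the Linux cpufreq sysfs interface (assuming root privileges and a kernel that exposes the `scaling_governor` files):

```
# Rough sketch: switch every CPU to the "performance" cpufreq governor.
# Assumes root privileges; not all systems expose these sysfs files.
import glob

def set_governor(governor="performance"):
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    set_governor()
```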

With the tweaks, the game itself is quite playable, though I still see some stutter in the benchmark mode that's not present when the benchmark runs in Windows. Benchmarking data is limited to the average FPS number, which makes quantifying the choppiness difficult. The statistic of interest for choppiness would be the *minimum* FPS, but I haven't found a tool for logging this data. Suggestions?

13 changes: 13 additions & 0 deletions content/blog/cinnamon-raisin-oatmeal.md
@@ -0,0 +1,13 @@
+++
author = "Caleb Callaway"
date = 2020-01-01T10:14:53Z
description = ""
draft = false
slug = "cinnamon-raisin-oatmeal"
title = "My Favorite Cinnamon Raisin Oatmeal Method"

+++


For me, cinnamon raisin oatmeal is the addition of cinnamon, sugar, and raisins to a basic oatmeal recipe. For oatmeal recipes that call for boiling water, I like to boil the water together with cinnamon sugar, so my oatmeal is cooked in what is effectively a light cinnamon simple syrup.

17 changes: 17 additions & 0 deletions content/blog/clear-ice.md
@@ -0,0 +1,17 @@
+++
author = "Caleb Callaway"
date = 2016-05-15T05:01:00Z
description = ""
draft = false
slug = "clear-ice"
title = "Clear Ice"

+++


After many weeks of experimentation with a variety of mechanisms for generating clear, pure ice, I impulse-bought the [Ice Chest](http://www.wintersmiths.com/collections/all/products/ice-chest). I haven't achieved the level of perfection seen in the product pictures, but the clarity of the ice is categorically superior to ordinary ice, and I recommend the product.

It's common knowledge that pure ice is beautiful and lasts longer, but one quality that I particularly enjoy is the taste: the directional freezing process has a very pronounced purifying effect, so I doubly recommend a directional freezing solution if your tap water has an unpleasant after-taste.

There's a lot of information about directional freezing on the internet, but verifying the efficacy of the process is quite simple: just fill an insulated vessel such as an insulated lunch box or vacuum flask half-full of water and leave it uncovered in the freezer for 24 hours. Don't fill the vessel completely, or the expanding ice may deform it.

35 changes: 35 additions & 0 deletions content/blog/cold-brew-at-home-2.md
@@ -0,0 +1,35 @@
+++
author = "Caleb Callaway"
date = 2017-01-07T18:17:46Z
description = ""
draft = false
slug = "cold-brew-at-home-2"
title = "Cold Brew At Home"

+++


Over the past year, I've experimented extensively with cold brew at home, spending too much on equipment and gadgets. This post is a distillation of what I've learned.

# Equipment
I was extremely dissatisfied with the Bruer device that shows up in a lot of search results; clean-up is easy, but setting the drip rate is fussy and repeatable brew results are almost impossible to achieve. Instead, I recommend the [OXO cold brew tower](https://www.oxo.com/cold-brew-coffee-maker). $50 is not bank-breaking, and the hassle-free clean-up is well worth it.

A kitchen scale is a requirement as well; I'm reasonably satisfied with [OXO's 5-pound scale](https://www.oxo.com/products/preparing/measuring/5lb-food-scale-w-pull-out-display#black), but I find myself wanting a higher precision readout when I'm mixing drinks.

If you want to grind your own beans, a good burr grinder is worth investigating. A medium grind setting seems to work well.

# Coffee Selection
The number of sources, blends, roasts, etc. can be overwhelming; if you don't know what coffee to use, start with a medium roast house blend, then experiment.

# Cold Brew Mocha Recipe

* 1 oz. chocolate syrup (I use and heartily recommend Torani's [Dark Chocolate Sauce](http://shop.torani.com/Dark-Chocolate-Sauce/p/TOR-780001&c=Torani@Sauces))
* .5 oz. heavy whipping cream
* 5.5 oz. 2% milk
* 2 oz. water
* 3 oz. cold brew concentrate

Blend ingredients together thoroughly in a blender, and serve over ice (I use [ice balls](https://www.brainvitamins.net/blog/clear-ice/)).

The quantities might seem strange, but they're designed to sum to 12 oz. The ratios of milk and water can be tweaked to taste, but I find that more than 1/2 oz. of cream makes the drink too rich, and less than 5 oz. of milk makes the drink more watery than I like.

42 changes: 42 additions & 0 deletions content/blog/cold-brew-recipes.md
@@ -0,0 +1,42 @@
+++
author = "Caleb Callaway"
date = 2019-01-09T08:38:20Z
description = ""
draft = false
slug = "cold-brew-recipes"
title = "Cold Brew Recipes"

+++


This post builds on the basic information in [my previous cold brew post](https://www.brainvitamins.net/blog/cold-brew-at-home-2/) with more recipes and preparation ideas.

# With Cream and Sugar
* 2 oz. cold brew concentrate
* 6 oz. water
* 1/2 oz. simple syrup
* 1/2 oz. heavy or whipping cream

For a hot beverage, heat everything except the cream to about 160° F, then add the cream and enjoy. For a cold beverage, mix everything together with ice.

## Extra Creamy
For an extra creamy cup, substitute unsweetened almond milk for water in the "Cream and Sugar" recipe. I don't enjoy the flavor of hot almond milk, so I prefer to put this variant on ice.

# Caffe Latte
The essential structure of the drink is cold brew concentrate (replacing the espresso shot in a traditional latte), sweetener, flavorings, and frothed milk. These ratios work well for me:

* 2 oz. cold brew concentrate
* One of the following flavor options:
* 1/2 oz. simple syrup with 1/2 teaspoon vanilla or hazelnut extract
* -OR- 1/2 oz. flavored syrup (e.g. Torani or Monin)
* 6 oz. frothed milk

Any milk that can be frothed should work. Cow's milk is a classic; almond milk also works well for cold beverages. If the almond milk contains sweetener, reduce the added sweetener as necessary. I highly recommend the [Breville Milk Cafe](https://www.breville.com/us/en/products/coffee/bmf600.html) for frothing milk; the cheap, hand-held whisks are too messy, and steamer wands are usually attached to bulky espresso machines.

For a hot beverage, heat everything except the milk in the microwave for about 45 seconds; I aim for just over 160° F, measured with a temperature gun. Froth the milk, then pour the frothed milk into the flavored hot coffee concentrate.

For a cold beverage, skip the heating step and add ice at the end.

## Blended Caffe Latte
With the Milk Cafe, one can froth the cold brew and flavorings together with the milk; the result is a light, coffee-flavored milk froth that can be enjoyed hot or cold. One could probably get a similar effect with a blender.

79 changes: 79 additions & 0 deletions content/blog/comparing-confidence-intervals.md
@@ -0,0 +1,79 @@
+++
author = "Caleb Callaway"
date = 2020-08-21T06:34:15Z
description = ""
draft = false
slug = "comparing-confidence-intervals"
title = "Benchmark Confidence Interval Part 2: Comparison"

+++


Benchmark data generally isn't interesting in isolation; once we have one data set, we usually gather a second set of data against which the first is compared. Reporting the second result as a percentage of the first result isn't sufficient if we're rigorous and report results with [confidence intervals](https://www.brainvitamins.net/blog/confidence-intervals-for-benchmarks/); we need a more nuanced approach.

Let's suppose we run a benchmark 5 times and record the results, then fix a performance bug and gather a second set of data to measure the improvement. The best intuition about performance gains is given by scores and confidence intervals that are [normalized](https://en.wikipedia.org/wiki/Normalization_(statistics)) using our baseline geomean score:

<table>
<tr>
<th></th>
<th>Geomean Score</th>
<th>95% Confidence Interval</th>
<th>Normalized Score</th>
<th>Normalized CI</th>
</tr>
<tr>
<th>Baseline</th>
<td>74.58</td>
<td>1.41</td>
<td>100.00%</td>
<td>1.88%</td>
</tr>
<tr>
<th>Fix</th>
<td>77.76</td>
<td>2.92</td>
<td>104.26%</td>
<td>3.91%</td>
</tr>
</table>

All normalization is done using the _same baseline_, even the bug fix confidence interval. One can work out the normalized confidence intervals for a baseline score of `100 +/- 1` and a second score of `2 +/- 1` to see why this must be so: normalized by its own score, the second interval becomes `+/- 50%`, but the second score is plotted as `2%` of the baseline, so its error bar must also be expressed as a percentage of the baseline (`+/- 1%`) for the two to be comparable.
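
Here's a rough sketch of the arithmetic (the helper is illustrative, not real benchmark tooling; the small differences from the table come from rounding of the inputs):

```
# Sketch: express scores and confidence intervals as percentages of the
# baseline geomean score. Values are taken from the table above.
def normalize(score, ci, baseline_score):
    return 100.0 * score / baseline_score, 100.0 * ci / baseline_score

baseline = (74.58, 1.41)  # geomean score, 95% CI
fix = (77.76, 2.92)

for label, (score, ci) in [("Baseline", baseline), ("Fix", fix)]:
    norm_score, norm_ci = normalize(score, ci, baseline[0])
    print(f"{label}: {norm_score:.2f}% +/- {norm_ci:.2f}%")

# Baseline: 100.00% +/- 1.89%
# Fix: 104.26% +/- 3.92%
```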

Now, let's visualize (using a LibreOffice Calc chart with custom [Y error bars](https://help.libreoffice.org/3.3/Chart/Y_Error_Bars)):

![ci-comparison-v1-1](/blog/content/images/2020/08/ci-comparison-v1-1.png)

Whoops! The confidence intervals overlap; something's wrong here. We can't be confident our performance optimization will reliably improve the benchmark's performance unless 95% of our new results fall outside 95% of our old results. Something is dragging down our score, and we cannot confidently reject our [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis).

The root causes for such negative results are rich and diverse, but for illustrative purposes, let's suppose we missed an edge case in our performance optimization that interacted badly with a power management algorithm. Our intrepid product team has fixed this issue, and now we have:

<table>
<tr>
<th></th>
<th>Geomean Score</th>
<th>95% Confidence Interval</th>
<th>Normalized Score</th>
<th>Normalized CI</th>
</tr>
<tr>
<th>Baseline</th>
<td>74.58</td>
<td>1.41</td>
<td>100.00%</td>
<td>1.88%</td>
</tr>
<tr>
<th>2nd Fix</th>
<td>80.18</td>
<td>1.63</td>
<td>107.51%</td>
<td>2.18%</td>
</tr>
</table>

![ci-comparison-v2](/blog/content/images/2020/08/ci-comparison-v2.png)

Much better; we can confidently reject the null hypothesis and assert that our latest fix has indeed improved performance of this benchmark.
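
The distinction between the two charts boils down to a simple overlap check on the raw scores and intervals; here's a rough sketch (the helper name is illustrative, the values come from the tables above):

```
# Sketch: do two confidence intervals [score - ci, score + ci] overlap?
def intervals_overlap(score_a, ci_a, score_b, ci_b):
    return (score_a - ci_a) <= (score_b + ci_b) and (score_b - ci_b) <= (score_a + ci_a)

print(intervals_overlap(74.58, 1.41, 77.76, 2.92))  # True: the first fix overlaps the baseline
print(intervals_overlap(74.58, 1.41, 80.18, 1.63))  # False: the second fix is cleanly separated
```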

_Many thanks to Felix Degrood for his help in developing my understanding of these concepts and tools_

40 changes: 40 additions & 0 deletions content/blog/complex-type-syntax.md
@@ -0,0 +1,40 @@
+++
author = "Caleb Callaway"
date = 2016-04-25T01:23:42Z
description = ""
draft = false
slug = "complex-type-syntax"
title = "New Complex Type Syntax"

+++


As part of the on-going build-out of recursive types in newt, complex types have been re-worked such that every complex type is a dictionary of type declarations (previously, record types were a dictionary of _values_, with special logic to generate modified copies of this dictionary). In this new model, type declarations that reference existing types are implemented as type _aliases_. Thus, in the following type declaration, `person.age` is an alias for `int`, `person.name` is an un-aliased record type, and `person.name.first` and `person.name.last` both alias the built-in type `string`.

```
person {
    age:int,
    name {
        first:string,
        last:string
    }
}
```

For purposes of assignment and conversion, a type alias is directly equivalent to the type it aliases.

The `struct` keyword is notably absent from the preceding record type definition, and there are now commas separating type members. These are not accidents: the re-worked type declarations allow for arbitrarily nested type definitions, and repeated use of the `struct` and `sum` keywords felt heavy and inelegant. For this (primarily aesthetic) reason, the keywords are omitted from nested types, and to maintain a uniform, non-astonishing grammar, they are omitted from top-level complex type declarations as well.

Omission of the keywords requires another mechanism for differentiating sum and product types, however, so members of record types must now be comma-delimited, while sum type variants are delimited by a vertical bar (that is, a pipe). In this new syntax, a linked list of integers might be expressed as follows:

```
list {
    end
    | item {
        data:int,
        next:list
    }
}
```

The new syntax very closely matches the [proposed syntax for map literals](https://github.com/cqcallaw/newt/issues/11), which is a nice isomorphism, but it does raise legibility concerns. Time will tell.

25 changes: 25 additions & 0 deletions content/blog/confidence-intervals-for-benchmarks.md
@@ -0,0 +1,25 @@
+++
author = "Caleb Callaway"
date = 2020-08-19T04:03:10Z
description = ""
draft = false
slug = "confidence-intervals-for-benchmarks"
title = "Confidence Intervals for Benchmarks"

+++


When benchmarking, [confidence intervals](https://www.mathsisfun.com/data/confidence-interval.html) are a standard tool that gives us a reliable measure of how much run-to-run variation occurs for a given workload. For example, if I run several iterations of the Bioshock benchmark and score each iteration by recording the average FPS, I might report Bioshock’s average (or [geomean](https://medium.com/@JLMC/understanding-three-simple-statistics-for-data-visualizations-2619dbb3677a)) score as `74.74` FPS with a 99% confidence interval of `0.10`. By reporting this result, I'm predicting that 99% of Bioshock scores on this platform configuration will fall between 74.64 and 74.84 FPS.

Unless otherwise noted, confidence intervals assume the data is normally distributed:

[![QuaintTidyCockatiel-size_restricted](/blog/content/images/2020/08/QuaintTidyCockatiel-size_restricted.gif)](https://gfycat.com/quainttidycockatiel)

Each pebble in the demonstration represents one benchmark result. Our normal curve may be thinner and taller (or shorter and wider), but the basic shape is the same: most of the results will cluster around the mean, with a few outliers.

Normally distributed data means our 95% confidence interval will be smaller than our 99% confidence interval; 95% of the results will be clustered more closely around the mean value. If our 99% confidence interval is `[74.64, 74.84]`, our 95% interval might be `+/- 0.06`, or `[74.68, 74.80]`. The 100% confidence interval is always `[-infinity, +infinity]`; we’re 100% confident that every measured result will fall somewhere on the number line.
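
The exact arithmetic depends on the tooling, but as a rough sketch, a geomean score and interval can be computed from per-iteration scores like so (illustrative numbers, not the actual Bioshock data; the z-values assume the normal distribution discussed above):

```
# Sketch: geomean and confidence interval from per-iteration benchmark scores.
# The scores are illustrative; z = 1.96 gives a 95% interval, z = 2.576 a 99%
# interval, under the normal-distribution assumption.
import math
import statistics

def geomean(scores):
    return math.exp(statistics.mean(math.log(s) for s in scores))

def confidence_interval(scores, z=1.96):
    # Half-width of the interval around the mean score.
    return z * statistics.stdev(scores) / math.sqrt(len(scores))

scores = [74.6, 74.8, 74.7, 74.9, 74.7]  # average FPS from each iteration
print(f"{geomean(scores):.2f} +/- {confidence_interval(scores, z=2.576):.2f} (99%)")
# prints something like: 74.74 +/- 0.13 (99%)
```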

Computing the average of averages is not always statistically sound, so it may seem incorrect to take the average FPS from each iteration of a benchmark and average them together. In this case, however, we can confidently say that each average has [equal weight](https://math.stackexchange.com/questions/95909/why-is-an-average-of-an-average-usually-incorrect/95912#95912); if not, we need a different benchmark!
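
For intuition on why the weights matter, here's a quick sketch with made-up numbers:

```
# Sketch: an unweighted average of averages is only correct when each group
# carries equal weight. Made-up numbers for illustration.
group_a = [100] * 10  # 10 samples averaging 100
group_b = [50]        # 1 sample averaging 50

avg_of_avgs = (sum(group_a) / len(group_a) + sum(group_b) / len(group_b)) / 2
true_avg = sum(group_a + group_b) / len(group_a + group_b)

print(avg_of_avgs)  # 75.0
print(true_avg)     # ~95.45
```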

Next: [Comparing Benchmark Results](https://www.brainvitamins.net/blog/comparing-confidence-intervals/)

15 changes: 15 additions & 0 deletions content/blog/dissension.md
@@ -0,0 +1,15 @@
+++
author = "Caleb Callaway"
date = 2016-04-08T06:59:54Z
description = ""
draft = false
slug = "dissension"
title = "Dissension"

+++


Via BoingBoing, I recently encountered an [interesting read](http://www.theguardian.com/society/2016/apr/07/the-sugar-conspiracy-robert-lustig-john-yudkin) about how the scientific consensus on diet was shaped by decidedly unscientific means. Ironically, the article is published by an organization that is, roughly speaking, a newspaper, even as it explicitly mentions that newspapers have a credibility problem. Informed individuals who would say Yudkin was a fraud may well exist; I suppose that readers will believe whatever they believe.

I personally find the narrative of corruptible science believable, which is why I think it dangerous to categorically dismiss dissenters from any scientific consensus as fools. Deniers of anthropogenic climate change may in fact be Yudkins, however small the probability might be.
