Recent posts
FizzBuzz with Composition. In C.
Just for the heck of it. Nothing fancy, aside from mild cognitive dissonance.
Correction: The result was of course closer to composition than to continuations, as was pointed out to me by a colleague.
#include <stdio.h>
#include <string.h>

/* Copyright (c) 2014 Eugene Crosser. */
/* License: CC0 (http://creativecommons.org/choose/zero/) */

/* Tongue-in-cheek "continuation passing" implementation of FizzBuzz. */
/* Inspired by this: */
/* http://themonadreader.files.wordpress.com/2014/04/fizzbuzz.pdf */

/* With strings and garbage collection we would have used string objects. */
/* As we don't want to muddle the code with memory management, we just use */
/* static buffers here. Make them long enough for the longest output. */

typedef struct {
    char value[16];
    char dflt[16];
} fbstate_t;

/* If `what` is divisible by `against`, prepend `v` to the accumulated
 * value and drop the default; otherwise pass the state through. */
fbstate_t test(int what, int against, const char *v, fbstate_t ost)
{
    fbstate_t nst;

    if (what % against == 0) {
        /* snprintf truncates safely and always terminates the buffer */
        snprintf(nst.value, sizeof(nst.value), "%s%s", v, ost.value);
        nst.dflt[0] = '\0';
    } else {
        nst = ost;
    }
    return nst;
}

/* Initial state: empty value, the number itself as the default. */
fbstate_t dflt(int what)
{
    fbstate_t nst;

    nst.value[0] = '\0';
    snprintf(nst.dflt, sizeof(nst.dflt), "%d", what);
    return nst;
}

/* Append whatever default is left (the number, unless some test fired). */
fbstate_t final(fbstate_t ost)
{
    fbstate_t nst = ost;    /* start from the incoming state */

    strncat(nst.value, nst.dflt, sizeof(nst.value) - strlen(nst.value) - 1);
    nst.dflt[0] = '\0';
    return nst;
}

fbstate_t run(int what)
{
    return final(
            test(what, 3, "Fizz",
             test(what, 5, "Buzz",
              dflt(what))));
}

int main(int argc, char *argv[])
{
    int i;

    for (i = 1; i <= 100; i++) {
        fbstate_t st = run(i);
        printf("%s\n", st.value);
    }
    return 0;
}
Migrating to Hakyll
I generally prefer static content on the web when possible, so WordPress had been a source of irritation for me for some time. After moving the site to a different hosting provider, I decided that it was time for a cleanup. Hakyll seemed to tick the right checkboxes, and offered another opportunity to play with Haskell.
After some fiddling with exitwp, I noticed this fork, which addresses Hakyll's idiosyncrasy with tag names containing underscores and allowed me to preserve the document tree structure that I had for my permalinks (preserving the permalinks was my goal).
This is what I have come up with so far:
--------------------------------------------------------------------------------
{-# LANGUAGE OverloadedStrings #-}
import Data.Monoid (mappend)
import Hakyll
--------------------------------------------------------------------------------
main :: IO ()
main = hakyll $ do
    match "images/*" $ do
        route   idRoute
        compile copyFileCompiler

    match "css/*" $ do
        route   idRoute
        compile compressCssCompiler

    match "about/index.markdown" $ do
        route $ constRoute "about.html"
        compile $ pandocCompiler
            >>= loadAndApplyTemplate "templates/default.html" defaultContext
            >>= relativizeUrls

    match "posts/*/*/*/*" $ do
        route $ gsubRoute "posts/" (const "") `composeRoutes`
                gsubRoute ".markdown" (const "/index.html")
        compile $ pandocCompiler
            >>= saveSnapshot "content"
            >>= loadAndApplyTemplate "templates/post.html"    postCtx
            >>= loadAndApplyTemplate "templates/default.html" postCtx
            >>= relativizeUrls

    create ["archive.html"] $ do
        route idRoute
        compile $ do
            let archiveCtx =
                    field "posts" (\_ -> postList recentFirst) `mappend`
                    constField "title" "Archives"              `mappend`
                    defaultContext
            makeItem ""
                >>= loadAndApplyTemplate "templates/archive.html" archiveCtx
                >>= loadAndApplyTemplate "templates/default.html" archiveCtx
                >>= relativizeUrls
                >>= removeIndexHtml

    match "index.html" $ do
        route idRoute
        compile $ do
            let indexCtx = field "posts" $ \_ ->
                    postListCont $ fmap (take 3) . recentFirst
            getResourceBody
                >>= applyAsTemplate indexCtx
                >>= loadAndApplyTemplate "templates/default.html" postCtx
                >>= relativizeUrls
                >>= removeIndexHtml

    match "templates/*" $ compile templateCompiler

    create ["atom.xml"] $ do
        route idRoute
        compile $ do
            let feedCtx = postCtx `mappend` bodyField "description"
            posts <- feedList
            renderAtom myFeedConfiguration feedCtx posts

    create ["rss.xml"] $ do
        route idRoute
        compile $ do
            let feedCtx = postCtx `mappend` bodyField "description"
            posts <- feedList
            renderRss myFeedConfiguration feedCtx posts
--------------------------------------------------------------------------------
postCtx :: Context String
postCtx =
    dateField "date" "%B %e, %Y" `mappend`
    defaultContext
--------------------------------------------------------------------------------
postList :: ([Item String] -> Compiler [Item String]) -> Compiler String
postList sortFilter = do
    posts   <- sortFilter =<< loadAll "posts/*/*/*/*"
    itemTpl <- loadBody "templates/archive-item.html"
    applyTemplateList itemTpl postCtx posts
--------------------------------------------------------------------------------
postListCont :: ([Item String] -> Compiler [Item String]) -> Compiler String
postListCont sortFilter = do
    posts   <- sortFilter =<< loadAllSnapshots "posts/*/*/*/*" "content"
    itemTpl <- loadBody "templates/post-item.html"
    applyTemplateList itemTpl postCtx posts
--------------------------------------------------------------------------------
feedList :: Compiler [Item String]
feedList = fmap (take 10) . recentFirst
    =<< loadAllSnapshots "posts/*/*/*/*" "content"
--------------------------------------------------------------------------------
removeIndexHtml :: Item String -> Compiler (Item String)
removeIndexHtml item = return $ fmap cuttail item
  where
    cuttail = withUrls $ replaceAll "/index.html" (const "/")
--------------------------------------------------------------------------------
myFeedConfiguration :: FeedConfiguration
myFeedConfiguration = FeedConfiguration
    { feedTitle       = "Average Blog"
    , feedDescription = "Random Ramblings"
    , feedAuthorName  = "Eugene Crosser"
    , feedAuthorEmail = "crosser@average.org"
    , feedRoot        = "http://www.average.org/blog/"
    }
There is one thing that I will need to address at some point: I would prefer to keep the markdown sources in a flat directory and have the posts placed into the tree based on their posting date. In the existing code, the route to an article is derived directly from the path to its markdown source.
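One possible approach, sketched here under the assumption that each post carries a `date: YYYY-MM-DD` metadata field, would be to build the route with Hakyll's `metadataRoute` and `customRoute`. The name `postDateRoute` is made up and this is untested, not part of the site code above:

```haskell
import Hakyll
import System.FilePath (takeBaseName)

-- Hypothetical sketch: route a flat posts/foo.markdown to
-- YYYY/MM/DD/foo/index.html based on the post's date metadata.
postDateRoute :: Routes
postDateRoute = metadataRoute $ \meta ->
    let dayPath = case lookupString "date" meta of
            Just d  -> map slash (take 10 d)  -- "2014-05-20" -> "2014/05/20"
            Nothing -> "undated"
        slash c = if c == '-' then '/' else c
    in customRoute $ \ident ->
        dayPath ++ "/" ++ takeBaseName (toFilePath ident) ++ "/index.html"
```

With something along these lines, the `match "posts/*"` rule could use `route postDateRoute` and the sources could stay in one flat directory.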
Monitoring with Event Correleation
Maybe this is obvious and/or well known to experts in the field, but I did not realize how to do event correlation properly, and that was one of the reasons why I did not do it at all in NetWatcher.
Now I know.
What is event correlation? Imagine that you are monitoring the responsiveness of a web server, disk space, load average and “pingability” on a machine. If this machine is disconnected from the network or crashes, then without event correlation you will get four alarms, one for each of the monitored attributes. You don’t want that: you want to know just that the machine is down (“unpingable”); the rest is unhelpful noise. To avoid superfluous notifications, you need to arrange the monitored attributes into a dependency tree, and when an attribute becomes “failed”, suppress notifications about failures of the attributes that depend on it. Quite simple, and, yes, the term “event correlation” is misleading, but never mind that.
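The suppression rule can be sketched as a tiny pure function. This is a simplified model with made-up names, not the NetWatcher code; it assumes the dependencies really form a tree (no cycles):

```haskell
import qualified Data.Map as M

data Status = Ok | Failed deriving (Eq, Show)

-- Each attribute: its current status and an optional upstream dependency.
type Tree = M.Map String (Status, Maybe String)

-- True if some ancestor of the attribute is in the Failed state.
ancestorFailed :: Tree -> String -> Bool
ancestorFailed tree name = case M.lookup name tree >>= snd of
    Nothing -> False
    Just up -> case M.lookup up tree of
        Just (Failed, _) -> True
        _                -> ancestorFailed tree up

-- Alarms worth sending: failed attributes with no failed ancestor.
reportable :: Tree -> [String]
reportable tree =
    [ name | (name, (Failed, _)) <- M.toList tree
           , not (ancestorFailed tree name) ]

main :: IO ()
main = print $ reportable $ M.fromList
    [ ("ping", (Failed, Nothing))      -- the machine itself is down
    , ("http", (Failed, Just "ping"))  -- these three failures are mere
    , ("disk", (Failed, Just "ping"))  -- consequences and are suppressed
    , ("load", (Failed, Just "ping"))
    ]
    -- prints ["ping"]
```

Out of four simultaneous failures, only the root cause survives the filter.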
My monitoring tool reports a status change immediately when the probe completes, and different attributes are probed independently and in parallel. I could check whether any upstream dependency of an attribute is in the “failed” state before reporting, but it is quite probable that, after a failure, a dependent attribute will be probed earlier than its dependency and be reported anyway.
And here, at last, is the solution to this problem:
When we notice a status change of an attribute that has dependencies, queue the report instead of sending it. When a probe of an attribute that has dependants succeeds, and the previous status was “success” as well, send the queued reports of the dependants; for any other combination of current and previous statuses, discard the queued reports. Status changes of an attribute that has no dependencies are reported right away, without queueing.
With more than one level of dependencies the scheme becomes only slightly more complicated: when you need to release the queued reports, you do not send them directly but re-queue them with your own upstream dependency.
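The core of the rule condenses into one small decision function. Again a sketch with made-up names, not the actual NetWatcher code; it covers only the release-or-discard choice for one probe of a dependency:

```haskell
data Status = Ok | Failed deriving (Eq, Show)

-- What to do with the reports queued under an attribute after one of
-- its probes, given the previous and the current status.
data Action = Release | Discard deriving (Eq, Show)

-- Release only when the dependency was healthy before the probe and
-- still is: then the dependants' failures cannot be blamed on it.
-- Any other combination means the dependency failed or flapped, so the
-- queued reports are upstream-induced noise.
decide :: Status -> Status -> Action
decide Ok Ok = Release
decide _  _  = Discard

main :: IO ()
main = do
    print (decide Ok Ok)      -- prints Release
    print (decide Ok Failed)  -- prints Discard
```

“Release” here means: actually send the reports if this attribute has no upstream of its own, otherwise re-queue them one level up, exactly as described above.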
And that’s it.
…or you can find more in the archives.