How I Built This Site
From a domain and an idea to a functional personal blog
This is my first proper post on web.heckinchonkeires.me. It’s the story of how this site came to be, and the things I learned and mistakes I made along the way. I think it will serve as a helpful explanation of the hows and whys of this website, as well as a good example of the kind of writing I intend to post here.
I’ve had a personal website on my mind for a long time. I experimented with some free web application hosts in the past, but their offerings didn’t seem appropriate for what I wanted to do here. I finally bought heckinchonkeires.me this year, and I started doing some work on turning an old PC into a web server. I still intend to finish that project, but it dawned on me that it was going to take a while. I’m trying to start a career as a software developer (hire me please), and while I expect knowing how to operate a small web server would be an asset, I need to prioritize. If learning all that is going to be of any use to my career, I need a way to show potential employers what I’ve learned. So that’s where this website is starting from: a place to publicly document my software projects. In addition to benefiting my career, this site serves as an incentive to document my work as I go. And turning my collection of comments, notes, and discarded test scripts into something comprehensible to somebody else helps me better understand and remember what I learned.
I had a domain and a decent idea of what I wanted to do with it. Before I found my current hosting solution, I had done some research into free blog options, but they all seemed to be aimed at non-coders. Wordpress.com and the like are fine if you just want an easy way to get words onto the internet, but they limit what somebody with web development skills can do with them. Last week I was thinking about changing Git hosts, and Codeberg came to mind. I looked around their website and came across codeberg.page. The page laid out three steps I understood, it was free, and they even allowed custom domains. I was already convinced to use Codeberg as a Git host, and Codeberg Pages also solved my web hosting problem. It only hosts static sites, so I can’t do everything I want to do with a personal website, but it gives me more than enough control for my immediate goals. For the time being, I just want to host text content somewhere on the open web, and now I can do that and more.
Getting anything at all on the internet was solved, but I still needed a way to make something presentable in a timely manner. I came across this tutorial, and it was very helpful. Not only did following it get a custom web page online quickly, it made me realize that what I needed was a static site generator. I chose Eleventy because the author used it in the tutorial, and it turned out to be a great choice. I also really appreciate the included deploy-pages shell script, and I’m still using a modified version of it.
With a domain, a web host, and a site generator, I still needed to put a site together. I'm sure I could have built a whole blog from scratch using Eleventy, but it would have taken significantly longer and probably not looked great. So I went looking for a better starting point and quickly found the Eleventy Base Blog starter project linked in the Eleventy documentation. It was perfect. That project would give me a functional blog in little time, and from what I had learned about Eleventy, it seemed like it would be easy to build on that base. On top of that, the Base Blog repository is well documented, so I feel confident that figuring out what I need to change when the time comes won't be a problem.
So I learned my way around the starter project and got a copy of it online under my own domain. At that point, I decided I wanted to limit the Git repository for the actual website to the rendered content and keep the generator itself in a separate repository. Changing the output directory for Eleventy was easy. I also wanted to keep using the deploy-pages script to save some typing every time I updated the site, but it didn't quite work for my new use case. I didn’t know much about shell scripting at the time; I understood where I needed to make the script change directories and why that was necessary, but parts of it that were beyond me kept causing errors. I knew enough to tell which parts were essential, so I removed the error-prone sections until it worked. My major addition was some error handling logic for a Codeberg bug that requires re-running git push. I think the script is still overcomplicated for my use case, though it allows for some flexibility should I change things.
My addition to deploy-pages.sh
```sh
git push "$remote" "$remote_branch"
if [ $? -ne 0 ]; then
    echo "Codeberg authentication failed. Retrying."
    git push "$remote" "$remote_branch"
    if [ $? -ne 0 ]; then
        echo "Error: Retry failed. Bother Codeberg about it."
        exit 1
    fi
fi
```
Once I had things organized so that I could easily push any updates I made to the live site, I started working on building the starter project into something I would feel comfortable calling my own. I started in the root directory, deleting some files that were there to facilitate services I didn’t plan on using. I updated the package.json file with details about myself and the project. I made sure to write my own README, and then I turned my attention to the LICENSE file. I had done a good amount of reading about copyright and licenses and I settled on using CC0 for my writing. My research hadn’t focused on software, so I wasn’t sure what my options were regarding licensing my modifications to code with an existing license. I searched for information about this more specific case, though I could have just stopped at reading the MIT license included in the starter project. In short: I can do whatever I want as long as I include the same license in the project. That precluded me from using CC0 for the site generator project, but I found the MIT license permissive enough. So I’m going to license my modifications using MIT. Also, while not required by that license, I decided I would keep the original author’s copyright in the license file as a sign of my respect and appreciation for their work. I added the author of the deploy-pages script to the license file as well, and I intend to do the same for as much of the non-library code I use in my projects as is feasible.
Having made my version of the Eleventy Base Blog repository ready to publish under my own (pseudo)name, I started working on the content for the site. All that thinking about copyright made me prioritize adding a CC0 declaration to the site’s footer, and I added my explanation of CC0 to the about page. Creative Commons’ template for a CC0 footer included links to their CC0 page, and I elected to put one on my about page as well. Everything worked, but I didn’t like that the external links opened in the same tab. At the time I thought opening external links in a new tab by default was a good thing to do, but I'm reconsidering that now. I looked up the HTML attributes I needed to add, then immediately realized there was no way I was going to manually add those attributes to every external link on the site. I’m a programmer after all, so I’m always up for automating things, even if it might not be necessary. I was sure Eleventy would make applying the attributes I wanted easier, so I went back to its documentation. Transforms sounded like what I was looking for: I needed to automatically transform external links after all. At the time, I thought Eleventy made the content of each output page available as HTML text strings (or at least something similar) and I was already familiar with Fast HTML Parser, a JavaScript library for working with HTML. My misunderstanding was that the content Eleventy provides to transforms is the rendered output before it’s written to the output directory, or at least something much closer to that than what it actually is. I wrote my transform function to parse the content provided by Eleventy and add the necessary attributes to all external links. I got that function working relatively quickly, but it only worked on the links in the footer.
My first working transform function from eleventy.config.js
```js
// parse comes from "node-html-parser" (Fast HTML Parser),
// imported at the top of eleventy.config.js
eleventyConfig.addTransform("external-link", function (content) {
  if ((this.page.outputPath || "").endsWith(".html")) {
    try {
      const html = parse(content);
      const links = html.getElementsByTagName("a");
      // make sure we found links to transform
      if (links && links.length > 0) {
        for (let link of links) {
          // avoid an error if the link somehow doesn't have an href
          const lhref = link.getAttribute("href") || "";
          // all external links should start with http
          if (lhref.startsWith("http")) {
            // avoid overwriting links with existing attributes
            if (!link.getAttribute("target"))
              link.setAttribute("target", "_blank");
            if (!link.getAttribute("rel"))
              link.setAttribute("rel", "noopener noreferrer");
          }
        }
        return html.toString();
      }
      // no links found, return original content
      return content;
    } catch (error) {
      // make sure to still return the original content if something goes wrong
      console.error(error);
      return content;
    }
  }
  // leave non-HTML output untouched
  return content;
});
```
That was when I realized that what I was actually transforming was the content of the template and layout files before Eleventy starts rendering them. My transform only worked for the footer because it was defined in a Nunjucks template file, which is partly HTML syntax. It was getting late, so I left myself some sections in the transform function to fill in for handling external links in non-HTML-formatted files. The next day I went completely off the rails. I spent most of it trying to write a general JavaScript function that could identify and modify links in arbitrary strings. The Linkify library helped, but the most it could do in this case was pick out every URL in a string. So Linkify saved me the identification part, but what it gave me was a list of URL strings and their start and end indices in the main content string. The perceived problem was that it found every URL in the main string. Including the ones inside comments. I now realize it would have been fine to skip filtering out URLs in comments. Comments don’t make it into Eleventy’s final render (by default, at least), so there would be no harm in processing the links inside them. But. I identified a problem and I thought I knew how to solve it. I enjoy solving problems, especially when I don’t have a deadline, so I went for it. By the end of the day, I had written what I later realized was my own implementation of JavaScript’s String.lastIndexOf() method (despite consulting the full list of JavaScript string methods earlier). I could generate a list of all (non-commented) URL strings, and I thought I had almost figured out how to determine whether they were inside tags. I realized I was struggling, so I gave up and went to bed.
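The index bookkeeping in my attempt below hinges on the regex `d` (hasIndices) flag, which makes `exec()` report each match’s start and end offsets. A standalone sketch of just that mechanism (the pattern and sample text are illustrations, not my actual content):

```js
// The "d" flag adds an .indices property to each exec() result,
// exposing [start, end] offsets for the match (end is exclusive).
const regex = /https?:\/\/\S+/gd;
const text = "see https://example.com and http://test.dev here";
const spans = [];
let match;
while ((match = regex.exec(text)) !== null) {
  spans.push(match.indices[0]); // [start, end] of the full match
}
console.log(spans); // [[4, 23], [28, 43]]
```

Those offsets are the same shape of data Linkify handed me, which is why the two approaches blurred together that day.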
What's left of my first attempt to filter out commented strings
```js
// Returns an array of the start and end indices of every instance of
// str in content that is not within a comment.
// Currently works for xml, nunjucks, and javascript comments.
function findUncommented(str, content) {
  const regex = RegExp(str, "gd");
  let found_array = [];
  let search_array;
  let last_index = 0;
  // Regex.exec() provides a handy way to find all the instances of str
  while ((search_array = regex.exec(content)) !== null) {
    const start_index = search_array.indices[0][0];
    const end_index = search_array.indices[0][1];
    const before_url = content.slice(last_index, start_index);
    const after_url = content.slice(end_index);
    let first_delim = null;
    let first_delim_index = 0;
    let last_delim = null;
    let last_delim_index = after_url.length;
    // STARTDELIMS and ENDDELIMS are constant arrays containing comment
    // delimiters for xml, javascript, and nunjucks defined earlier in
    // the file. They keep breaking my markdown, so I'm not including them
    // here.
    for (let delim of STARTDELIMS) {
      const dl_index = before_url.indexOf(delim);
      // we want the index of the delimiter closest to str
      if (dl_index && dl_index > first_delim_index) {
        first_delim_index = dl_index;
        first_delim = delim;
      }
    }
    // special case of single-line comments delimited by newlines
    if (first_delim == "\\") {
      last_delim = "\n";
      // the first newline after str should always be the
      // last delimiter in this case
      last_delim_index = after_url.indexOf("\n");
    } else {
      // similar to start delimiters
      for (let delim of ENDDELIMS) {
        const dl_index = after_url.indexOf(delim);
        if (dl_index && dl_index < last_delim_index) {
          last_delim_index = dl_index;
          last_delim = delim;
        }
      }
    }
    if (!first_delim || !last_delim) {
      found_array.push(start_index);
      found_array.push(end_index);
    }
  }
  return found_array;
}
```
Based on what I learned while arriving at isCommented() below, this doesn’t actually work. I thought it did at one point, though.
The most functional thing I produced
I remembered String.lastIndexOf() existed for this one.
```js
// Returns true if the substring of str starting at start_index and
// ending at end_index is within a pair of comment delimiters.
// Currently works for xml, nunjucks, and javascript comments.
function isCommented(start_index, end_index, str) {
  const before = str.slice(0, start_index);
  const after = str.slice(end_index);
  let first_dl_index = 0;
  let first_delim = null;
  // Javascript has comments that are partially delimited by newlines,
  // so we have to keep a special index for those.
  let nl_dl_index = 0;
  // STARTDELIMS and ENDDELIMS are constant arrays containing comment
  // delimiters for xml, javascript, and nunjucks defined earlier in
  // the file. They keep breaking my markdown, so I'm not including them
  // here.
  for (let delim of STARTDELIMS) {
    const dl_index = before.lastIndexOf(delim);
    if (delim == "//" && dl_index > nl_dl_index) {
      nl_dl_index = dl_index;
    } else if (dl_index > first_dl_index) {
      first_dl_index = dl_index;
      first_delim = delim;
    }
  }
  let matched_delim;
  // Pretty hacky way of avoiding finding the "//" in a url
  if (nl_dl_index > first_dl_index && before[nl_dl_index - 1] != ":") {
    // I don't think this ever happens with the content I'm using this with.
    // To generalize this function further, I'd need to figure out how to
    // actually tell when "//" delimits a comment.
    first_delim = "//";
    matched_delim = "\n";
  } else {
    // Matching delimiters should always work at this point.
    // Better to be safe than sorry though.
    matched_delim = ENDDELIMS[STARTDELIMS.indexOf(first_delim)] || null;
  }
  let last_delim = null;
  // >= 0 so a delimiter immediately after the substring still counts
  if (matched_delim && after.indexOf(matched_delim) >= 0)
    last_delim = matched_delim;
  return Boolean(first_delim && last_delim);
}
```
This worked great, but it still left me needing to find and modify the HTML tags.
The next morning came with some critical clarity: I realized why transforms were the wrong tool for what I was trying to do. Over breakfast I returned to the Eleventy documentation, looking for some way to insert my code further into the rendering process. I found the events page and the eleventy.after event. It wasn’t exactly what I wanted, but it did the job. (I was hoping to avoid re-writing to the output directory. eleventy.after feels like only a minor improvement over a standalone script that modifies the output content, since it just makes accessing the necessary page attributes easier.) All I had to do was modify my original HTML-parsing function slightly and slip it into eleventy.after. I had it working less than 15 minutes after sitting down at my desk. I’m working on reining in this tendency to create XY problems for myself. That day spent barking up the wrong tree was fun, but I’d rather have gotten around to working on this post a day earlier.
The actual solution in eleventy.config.js
```js
// fs comes from "node:fs" and parse from "node-html-parser",
// both imported at the top of eleventy.config.js
// make all external links open in a new tab
eleventyConfig.on(
  "eleventy.after",
  async ({ dir, results, runMode, outputMode }) => {
    for (let result of results) {
      if ((result.outputPath || "").endsWith(".html")) {
        const html = parse(result.content);
        const links = html.getElementsByTagName("a");
        // make sure we found links to modify
        if (links && links.length > 0) {
          for (let link of links) {
            // avoid an error if the link somehow doesn't have an href
            const lhref = link.getAttribute("href") || "";
            // by convention, all external links will start with http(s): or mailto:
            // TODO: make a regex already
            if (lhref.startsWith("http:") || lhref.startsWith("https:") || lhref.startsWith("mailto:")) {
              // avoid overwriting links with existing attributes
              if (!link.getAttribute("target"))
                link.setAttribute("target", "_blank");
              if (!link.getAttribute("rel"))
                link.setAttribute("rel", "noopener noreferrer");
            }
          }
          fs.writeFile(result.outputPath, html.toString(), (err) => {
            if (err)
              console.error(err);
          });
        }
      }
    }
  }
);
```
So I have a functional blog and I have a post on it. If you’ve made it this far, dear reader: sorry, first of all. Secondly: you might be wondering what else you can expect from this site. I have plenty of wild ideas for things to do with a personal website, but I’m prioritizing building a career, so I’m going to implement mostly the less wild ones. I’m trying to specialize in tools programming, and I have a lot of learning to do in the process. (I hardly know any C++ as of posting this, for example.) So I’m going to document that process, both to aid my understanding and to create a public record of my skills for potential employers. Maybe some people on a similar path will also benefit from my work here, but that would be a bonus. Outside of my chosen career path, I’m still very interested in web design and passionate about digital accessibility. I want to explore the limits of what’s possible with a static site, and I want to make this website as accessible as I can. Memorizing all of WCAG wouldn’t be much help in making the web better for everyone if I didn’t have the knowledge and experience to implement and go beyond those guidelines. So expect writing about things I learn and projects I work on related to tools programming and web design in the near future. I also have interests unrelated to my career that I want to write about, but such writing would belong elsewhere. If you’re still here and interested, watch this space. If you’re really interested in anything I write here, you’re welcome to send me an email about it. And if you’re looking to hire a software developer, I’m available.