Why Rust for a Portfolio?

I built this portfolio in Rust because I wanted to learn, not because it was the smart choice.

Spoiler: it wasn't the smart choice. If productivity was the goal, I'd have shipped a Next.js site in a weekend and gone back to playing Expedition 33.

Here's the thing: I'm a beginner. Full greenhorn in Rust, Kubernetes, Podman, and honestly, development in general. My first portfolio (V1) was a quick static site I threw together for job hunting. It worked. The job negotiations went well.

And then I had a choice. Keep the boring thing that works, or use the breathing room to actually learn something.

I picked the hard road. This post is about what I found at the end of it.

We'll cover the theme system (3 themes × 2 color modes - harder than it sounds), a build-time blog engine with bilingual support, a contact form with honeypot traps and IP hashing, SSR with WASM hydration, and security hardening that scored Grade A on an external pentest.

None of this was necessary. All of it was worth it.

The Research Phase

Before writing a single line of code, I did my homework. Not just reading official docs - I wanted to know what real people were saying in late 2025.

Why Rust at All?

Let's be honest: if I wanted to ship fast, I'd use Next.js or SvelteKit. Both have massive ecosystems and would've gotten me a working portfolio in days, not weeks.

But I noticed something interesting while researching - even Next.js is adopting Rust under the hood for their new Turbopack bundler. It seemed like the industry was moving toward Rust for performance-critical stuff. I figured: why not learn the language everyone's building their tools with?

The trade-off seemed clear to me: comfort vs. learning. I chose learning.

Picking a Frontend Framework

Rust has a few frontend options. I looked at Yew, Dioxus, and Leptos.

From what I gathered, Yew is the most mature - it's been around longer and has a bigger community. Dioxus seemed focused on cross-platform stuff (desktop, mobile, web). Leptos was newer but people kept saying good things about its SSR support.

What drew me to Leptos was how it handles updates. Instead of re-rendering whole components like React does, it uses something called "fine-grained reactivity" - updating only the specific parts of the page that actually changed. I'm not going to pretend I fully understood the technical details, but the benchmarks I found suggested it was fast. And for a portfolio that needs to show up in search results, the server-side rendering support seemed important.
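
Here's roughly what that looks like in a toy Leptos counter - not code from this project, just the textbook shape of a signal. The closure reading the signal is the only thing that re-runs when the count changes; the rest of the DOM never gets touched.

use leptos::prelude::*;

#[component]
fn Counter() -> impl IntoView {
    // A signal: reading it inside the view subscribes that exact spot to updates
    let count = RwSignal::new(0);

    view! {
        <button on:click=move |_| count.update(|n| *n += 1)>
            "Clicks: " {move || count.get()}
        </button>
    }
}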

Picking a Backend Framework

For the backend, I looked at Actix Web and Axum. Both seemed production-ready.

From what I read, Actix is faster in raw benchmarks. But Axum is built by the same team that makes Tokio (the async runtime most Rust web stuff uses), so everything fits together more naturally. The middleware system made more sense to me when I was reading examples. Also, fewer macros. My brain only has so many #[derive] slots available.

The Decision

I'm not going to pretend I found the objectively "best" stack. I read forum posts, scrolled through GitHub issues, watched developers argue on Hacker News.

Leptos + Axum felt like the right fit for what I wanted to build and what I wanted to learn. That was enough for me.

The Toolkit

Before diving into code, here's what I actually used to build this thing.

Linux as Daily Driver

I run Linux on my main machine. Not because I'm some kind of purist, but because it's where the tools live. Docker, Kubernetes, Rust toolchains - everything just works better when you're not fighting compatibility layers. Plus, when your deployment target is Linux containers, developing on Linux means fewer surprises.

Podman over Docker

I went with Podman instead of Docker. Two reasons: it's daemonless (no background service eating resources), and it runs rootless by default. As someone paranoid about security, not giving my container runtime root access felt right. The commands are almost identical to Docker, so the learning curve was gentle.

Kubectl and the K8s Learning Curve

Kubernetes has a reputation for being complex. That reputation is earned. But kubectl became my friend. I spent a lot of time running kubectl describe pod and kubectl logs trying to figure out why things weren't working.

dreams:
  format: yaml
  isAflex: false

Plane for Project Management

I run a self-hosted instance of Plane - it's like Jira but open source and doesn't make me want to cry. Every task, every bug, every "I'll fix this later" got tracked. Cycles helped me stay focused, and having the full project history saved my sanity more than once. When I came back to the project after a break, I could see exactly where I left off.

Claude Code

This one's a bit meta, but I used Claude Code throughout the project. Not as a "write my code for me" button, but as a partner.

When I was stuck on Leptos reactivity concepts, it helped me understand. When I had a bug I couldn't see, we debugged together. When I needed to make architecture decisions, I bounced ideas off it. It reviewed my code, caught issues I missed, and honestly made me a faster learner than I would've been alone.

As a beginner, having something that could explain why things work (not just what to type) made a real difference. Pair programming without needing to find a human at 2 AM.

Theme System: 3 Themes × 2 Color Modes

I wanted three distinct visual themes (Terminal, Forge, Bitart) and each one needed to work in both dark and light mode. That's six combinations. It sounded simple until I started building it.

The Architecture

Leptos uses something called signals for reactivity. Think of them as variables that automatically update the UI when they change. The theme state lives in a context that any component can access:

// src/themes/mod.rs
use leptos::prelude::*;

#[derive(Clone, Copy)]
pub struct ThemeContext {
    pub theme: RwSignal<Theme>,
}

pub fn provide_theme_context() {
    let initial_theme = load_theme_from_storage();
    let theme = RwSignal::new(initial_theme);

    // React to theme changes
    Effect::new(move || {
        let t = theme.get();
        save_theme_to_storage(t);
        apply_theme_to_document(t);
    });

    provide_context(ThemeContext { theme });
}

The Effect runs whenever the signal changes - it saves to localStorage and updates the document. Switching themes is just one line:

theme_ctx.theme.update(|t| *t = t.next());
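
That next() comes from the Theme enum itself. A rough sketch of the shape (the real enum in the project also has to juggle color mode, and the variant names here are just the theme names from earlier):

#[derive(Clone, Copy, PartialEq, Default)]
pub enum Theme {
    #[default]
    Terminal,
    Forge,
    Bitart,
}

impl Theme {
    // Cycle to the next theme, wrapping around at the end
    pub fn next(self) -> Self {
        match self {
            Theme::Terminal => Theme::Forge,
            Theme::Forge => Theme::Bitart,
            Theme::Bitart => Theme::Terminal,
        }
    }
}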

The localStorage Dance

Here's where it got tricky. Leptos runs on both server and client - server-side rendering first, then the client "hydrates" and takes over. But localStorage only exists in the browser. So I had to wrap all the storage code in feature flags:

fn load_theme_from_storage() -> Theme {
    #[cfg(feature = "hydrate")]
    {
        if let Some(window) = web_sys::window() {
            if let Ok(Some(storage)) = window.local_storage() {
                if let Ok(Some(theme)) = storage.get_item("theme") {
                    return Theme::from_code(&theme);
                }
            }
        }
    }
    Theme::default()
}

On the server, it just returns the default. On the client, it checks localStorage. Took me a while to wrap my head around this dual-world thing.
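
The other half of that Effect, apply_theme_to_document, is what bridges over to the CSS in the next section. A sketch of the idea - the data-theme attribute is what the stylesheets key off, while theme.code() is my stand-in for whatever the real accessor is called:

#[cfg(feature = "hydrate")]
fn apply_theme_to_document(theme: Theme) {
    // Flip the data-theme attribute on <html>; the CSS does the rest
    if let Some(document) = web_sys::window().and_then(|w| w.document()) {
        if let Some(root) = document.document_element() {
            let _ = root.set_attribute("data-theme", theme.code());
        }
    }
}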

The CSS Variable Explosion

The real complexity wasn't in Rust - it was in CSS. Each theme needed its own set of colors, fonts, and spacing. Each color mode needed variants. I ended up with a lot of CSS variables:

[data-theme="terminal"] {
    --font-primary: 'JetBrains Mono', monospace;
    --color-bg: #1a1a1a;
    --color-text: #e0e0e0;
    /* ... dozens more */
}

Multiply that by three themes and two color modes. My SCSS file started looking like the control panel of the Millennium Falcon.

It works, but I won't pretend it's elegant. If I did it again, I'd probably look into CSS-in-Rust solutions. Live and learn.

Blog Engine: Build-Time Markdown

I needed a blog, and I wanted it bilingual - English and French in every post. I also didn't want to ship a markdown parser to the browser.

Why Build-Time?

The idea is simple: parse all the markdown files when you compile the app, not when someone visits the page. The HTML is ready to go, baked right into the binary. No runtime parsing, no loading spinners, instant content.

Rust has a neat feature called build.rs - a script that runs before compilation. That's where the magic happens.
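
To make that concrete, here's a rough sketch of the shape - not the project's actual script. The content/blog directory, the POSTS layout, and the use of pulldown-cmark are all assumptions:

// build.rs - minimal sketch of a build-time markdown pipeline
use std::{env, fs, path::Path};

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let mut generated = String::from("pub static POSTS: &[(&str, &str)] = &[\n");

    for entry in fs::read_dir("content/blog").unwrap() {
        let path = entry.unwrap().path();
        if path.extension().and_then(|e| e.to_str()) != Some("md") {
            continue;
        }
        let markdown = fs::read_to_string(&path).unwrap();

        // Markdown -> HTML at compile time
        let mut html = String::new();
        pulldown_cmark::html::push_html(&mut html, pulldown_cmark::Parser::new(&markdown));

        let slug = path.file_stem().unwrap().to_string_lossy().to_string();
        generated.push_str(&format!("    ({:?}, {:?}),\n", slug, html));
    }
    generated.push_str("];\n");

    // Write generated Rust for the crate to include, and rebuild when posts change
    fs::write(Path::new(&out_dir).join("blog_posts.rs"), generated).unwrap();
    println!("cargo:rerun-if-changed=content/blog");
}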

The Bilingual Trick

I didn't want to maintain two separate files for each post. So I put both languages in one markdown file with simple markers:

---
slug: "my-post"
title_en: "My Post"
title_fr: "Mon Article"
---

[EN marker]
English content here...

[FR marker]
French content here...

The build script splits them apart on those language markers (HTML comments in the actual files):

// build.rs
fn split_bilingual_content(content: &str) -> (String, String) {
    let parts: Vec<&str> = content.split("[FR marker]").collect();

    let en_part = parts
        .first()
        .map(|s| s.replace("[EN marker]", "").trim().to_string())
        .unwrap_or_default();

    let fr_part = parts
        .get(1)
        .map(|s| s.trim().to_string())
        .unwrap_or_default();

    (en_part, fr_part)
}

Simple string splitting. Nothing fancy, but it works.

Syntax Highlighting

For code blocks, I used a crate called syntect. It parses code and wraps tokens in CSS classes that I can style per theme. The highlighting happens at build time too - just CSS classes in the HTML, no JavaScript highlighter needed.

use syntect::html::{ClassedHTMLGenerator, ClassStyle};
use syntect::parsing::SyntaxSet;
use syntect::util::LinesWithEndings;

fn highlight_code(code: &str, lang: &str, ss: &SyntaxSet) -> String {
    // Fall back to plain text when the language tag isn't recognized
    let syntax = ss
        .find_syntax_by_token(lang)
        .unwrap_or_else(|| ss.find_syntax_plain_text());

    let mut generator = ClassedHTMLGenerator::new_with_class_style(
        syntax, ss, ClassStyle::Spaced
    );

    for line in LinesWithEndings::from(code) {
        let _ = generator.parse_html_for_line_which_includes_newline(line);
    }

    generator.finalize()
}

The output is a static array of blog posts compiled directly into the Rust code. The compiler yells at me if my frontmatter has a typo. Harsh, but fair.
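
On the app side, pulling the generated code in is a one-liner. Sticking with the sketch above (blog_posts.rs and POSTS are my made-up names, not the project's):

// Include the code that build.rs wrote into OUT_DIR
include!(concat!(env!("OUT_DIR"), "/blog_posts.rs"));

pub fn find_post(slug: &str) -> Option<&'static str> {
    POSTS.iter().find(|(s, _)| *s == slug).map(|(_, html)| *html)
}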

The Trade-off

Every time I write a new post, I need to rebuild the whole app. For a personal blog with a handful of posts, that's fine. For a high-volume site, you'd want something different.

But here's the other thing: build-time means no CMS, which means no authentication system. And honestly? That's a feature right now, not a bug. I'm still working on auth for another project, and I'm not going to trust my own setup of open-source authentication on a public website until I'm confident I can secure it.

I'm not trying to build a Death Star here, just so some lousy rebel can one-shot my exhaust port.

Know your constraints. Know your skill level. Ship what you can secure.

Contact Form: Security First

A contact form seems simple. Name, email, message, submit. But it's also a door into your system, and bots love doors.

Never Store Raw IPs

I wanted to rate-limit by IP, but storing raw IP addresses felt wrong - privacy-wise and legally. So I hash them:

use sha2::{Digest, Sha256};

let salt = std::env::var("CONTACT_SALT")
    .unwrap_or_else(|_| "default-salt".to_string());
let mut hasher = Sha256::new();
hasher.update(ip.as_bytes());
hasher.update(salt.as_bytes());
let ip_hash = format!("{:x}", hasher.finalize());

Same IP always produces the same hash (so rate limiting works), but I can't reverse it to get the actual address. The salt lives in an environment variable, not in code.

Rate Limiting in Layers

One layer isn't enough. I ended up with three:

  1. Application level: 3 submissions per hour per IP hash
  2. Ingress level: nginx rate limiting (10 requests/second)
  3. Infrastructure level: Fail2ban watching the logs

If someone's really determined, they'll hit all three walls. Most bots give up after the first.
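
Layer 1 is simple enough to sketch. This is an in-memory version of "3 per hour per IP hash" to show the shape, not the project's actual implementation (the real counters could just as well live in Postgres):

use std::collections::HashMap;
use std::time::{Duration, Instant};

const MAX_PER_HOUR: usize = 3;

// Tracks recent submission timestamps per IP hash
struct RateLimiter {
    hits: HashMap<String, Vec<Instant>>,
}

impl RateLimiter {
    fn new() -> Self {
        Self { hits: HashMap::new() }
    }

    fn allow(&mut self, ip_hash: &str) -> bool {
        let now = Instant::now();
        let window = Duration::from_secs(3600);
        let hits = self.hits.entry(ip_hash.to_string()).or_default();

        // Drop timestamps older than an hour, then check the budget
        hits.retain(|t| now.duration_since(*t) < window);
        if hits.len() >= MAX_PER_HOUR {
            return false;
        }
        hits.push(now);
        true
    }
}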

Input Validation

One thing I learned: always check for newlines in email fields. It's an old trick called header injection - attackers try to sneak extra email headers into your SMTP call. Simple fix:

if email.contains('\n') || email.contains('\r') {
    return Ok(ContactResult {
        success: false,
        message_key: "contact_error_email".to_string(),
    });
}

Paranoid? Maybe. But it costs nothing to check.

Oh, and there's a hidden surprise for bots masquerading as Winnie the Pooh. I won't say more than that.

SSR + Hydration: The Dual Build

Leptos does something clever: it renders HTML on the server for fast initial load and SEO, then the browser downloads WASM and "hydrates" the page to make it interactive. Best of both worlds.

In practice, this means building your app twice - once for the server (native Rust), once for the client (WebAssembly). Cargo-leptos handles this, but it took me a while to understand what was actually happening.

The CSP Puzzle

Content Security Policy headers tell the browser what scripts are allowed to run. You want them tight - no inline scripts unless explicitly trusted. Leptos injects scripts for hydration, so I needed to use nonces (one-time tokens):

// src/app.rs
#[component]
fn CspHeader() -> impl IntoView {
    #[cfg(feature = "ssr")]
    {
        if let Some(nonce) = use_nonce() {
            let csp = format!(
                "default-src 'self'; \
                 script-src 'self' 'wasm-unsafe-eval' 'nonce-{}'; \
                 style-src 'self' 'unsafe-inline'; \
                 frame-ancestors 'none'",
                nonce
            );

            if let Some(response_options) = use_context::<ResponseOptions>() {
                response_options.insert_header(
                    HeaderName::from_static("content-security-policy"),
                    HeaderValue::from_str(&csp).unwrap(),
                );
            }
        }
    }
    ()  // No visual output
}

The wasm-unsafe-eval part is necessary because the browser needs permission to compile and instantiate the WASM module; without it, a strict CSP blocks WebAssembly outright. It's a controlled exception.

The Postgres War Story

This isn't strictly about SSR, but it happened while deploying and I wasted days on it.

I was running Kubernetes on my homelab. PostgreSQL kept crashing with a cryptic mount propagation error. I tried everything - checked NFS settings, tested with busybox containers, read through GitHub issues at 2 AM.

Turns out: postgres:18-alpine is incompatible with containerd 2.0.5. Something about the container image's filesystem layout breaks during init. The fix? Downgrade to postgres:16-alpine.

Two days of debugging. One version number change. Losing days over one number made me wish for a trip to the Event Horizon to say hi to Sam Neill.

That's Kubernetes for you. When it works, it's magic. When it doesn't, you're reading container runtime changelogs at midnight.

Security Hardening: The Full Stack

I'm a beginner. That means I'm also paranoid - I don't trust my own code yet. So I layered defenses everywhere I could.

Security Headers

The server adds headers to every response:

  • HSTS: Forces HTTPS for a year, including subdomains
  • X-Frame-Options: Blocks embedding in iframes (no clickjacking)
  • X-Content-Type-Options: Prevents MIME-type sniffing
  • Referrer-Policy: Controls what info gets sent to other sites
  • CSP: Restricts what scripts, styles, and connections are allowed

Most of these are one-liners in the Axum middleware. There's no excuse not to have them.
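
To show what "one-liners" means, here's a generic axum 0.7-style sketch - not the portfolio's exact middleware, and the Referrer-Policy value is just a common default:

use axum::{extract::Request, http::header, middleware::Next, response::Response};

// Attach with: Router::new().layer(axum::middleware::from_fn(security_headers))
async fn security_headers(req: Request, next: Next) -> Response {
    let mut res = next.run(req).await;
    let headers = res.headers_mut();
    headers.insert(
        header::STRICT_TRANSPORT_SECURITY,
        "max-age=31536000; includeSubDomains".parse().unwrap(),
    );
    headers.insert(header::X_FRAME_OPTIONS, "DENY".parse().unwrap());
    headers.insert(header::X_CONTENT_TYPE_OPTIONS, "nosniff".parse().unwrap());
    headers.insert(
        header::REFERRER_POLICY,
        "strict-origin-when-cross-origin".parse().unwrap(),
    );
    res
}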

Container Hardening

The app runs in Kubernetes, and I locked down the container:

  • Runs as non-root user (uid 1000)
  • Read-only filesystem
  • All Linux capabilities dropped
  • Seccomp profile enabled

If someone somehow got code execution inside the container, they'd find themselves in a very boring sandbox with almost no permissions.

Network Policies

By default, pods in Kubernetes can talk to each other freely. I changed that:

  • Default deny all ingress traffic
  • Explicit allow only from nginx-ingress to the portfolio pod
  • Explicit allow only from portfolio to postgres
  • Controlled egress for DNS, HTTPS, and SMTP

Everything else gets blocked. If a pod gets compromised, it can't easily pivot to other services.

The Pentest

I ran an external penetration test against the live site. Not going to share the methodology or exact results (that would defeat the purpose), but I'll say this: Grade A on security headers, and the checks against the OWASP Top 10 came back clean.

There were some findings at the infrastructure level - TLS configuration on nginx-ingress that needed tightening. Fixed now. Security isn't a checkbox, it's ongoing maintenance.

The point isn't that the site is unhackable. Nothing is. The point is making it annoying enough that attackers move on to easier targets... please.

What I Learned

I started this project as a beginner in Rust, Kubernetes, and web development. I'm still a beginner. But I'm a beginner who shipped something.

Theme systems are genuinely hard

I thought "three themes with dark mode" would take a weekend. It took weeks. The CSS alone is hundreds of lines. If you're planning something similar, budget more time than you think.

Security-first mindset pays off

Baking security in from day one is easier than bolting it on later. Every feature I added, I asked: "How could this be abused?" It slowed me down, but the pentest results were worth it.

Trying hard things is fun

There were moments of frustration. The Postgres debugging saga. The SSR hydration confusion. The CSS variable explosion. But I was learning the whole time, and that made it fun.

I picked a road and walked it. At the end, I found a working portfolio, a lot of new knowledge, and a weird appreciation for container runtimes.

Would I do it again?

Honestly? Yes. Not because Rust + Leptos is the "right" choice for a portfolio - it probably isn't. But because I learned more in this project than I would have in a year of tutorials.

If you're curious about Rust for web development, I'd say: try it. Pick something small. Accept that it'll take longer. Enjoy the ride.


The site is live at kovacs.pilgrim.ovh. The source code is on GitHub. If you want to try Leptos yourself, the official book is a great starting point.

Thanks for reading.