Tag: fandom

  • Tracking Love and Hate in Modern Fandoms, Part Two: Star Trek: Starfleet Academy

Ironically, I found this image on a subreddit complaining that the poster doesn’t look Star Trek enough.

    What I Learned Trying to Build a Scalable Dashboard Framework

    The goal of this phase of RewindOS was straightforward: design a scalable ingestion layer capable of tracking fandom signals in near-real time. In practice, that meant building a lightweight FastAPI service, backing it with PostgreSQL, and deploying it on Railway to validate whether the architecture could support future dashboards at low operational cost.

    As a concrete test case, I wanted to monitor Reddit discussions around Star Trek: Starfleet Academy—specifically post velocity, controversy spikes, and narrative drift across subreddits. The idea was to ingest posts and metadata, normalize them into Postgres, and surface them as cultural “signals” alongside IMDb and news data.
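To make that concrete, here is a minimal sketch of what the ingestion endpoint could look like: FastAPI in front of Postgres, with one upsert per post so repeated runs simply refresh scores and comment counts. The names here (RawPost, fandom_signals, DATABASE_URL) are illustrative, not the actual RewindOS schema.

```python
# Minimal sketch of the ingestion layer: FastAPI in front of Postgres.
# Names (RawPost, fandom_signals, DATABASE_URL) are illustrative only.
import os

import asyncpg
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RawPost(BaseModel):
    source: str          # e.g. "reddit:r/startrek"
    external_id: str     # platform-native post id
    title: str
    score: int
    num_comments: int
    created_utc: float

@app.on_event("startup")
async def connect_db():
    # Railway exposes the Postgres connection string as an environment variable.
    app.state.db = await asyncpg.connect(os.environ["DATABASE_URL"])

@app.post("/signals")
async def ingest(post: RawPost):
    # Upsert keeps repeated runs idempotent as scores and comment counts grow.
    await app.state.db.execute(
        """
        INSERT INTO fandom_signals
            (source, external_id, title, score, num_comments, created_utc)
        VALUES ($1, $2, $3, $4, $5, to_timestamp($6))
        ON CONFLICT (source, external_id)
        DO UPDATE SET score = $4, num_comments = $5
        """,
        post.source, post.external_id, post.title,
        post.score, post.num_comments, post.created_utc,
    )
    return {"status": "stored"}
```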

    What I didn’t fully appreciate—despite knowing it abstractly—was just how aggressively Reddit now enforces its platform boundaries.

    Reddit’s API and the End of Casual Scraping

    Reddit has effectively closed the door on nearly all casual or semi-legitimate scraping workflows. API access is tightly gated, application approval is slow and opaque, and even read-only endpoints are heavily rate-limited or restricted by OAuth scope. Anonymous access, once tolerated at low volumes, is now unreliable at best.

    Beyond the API, Reddit’s cybersecurity posture is formidable:

    • Bot detection at the edge (behavioral + fingerprinting)
    • Aggressive IP and ASN-based throttling
    • User-agent and request-pattern heuristics
    • Inconsistent but intentional JSON endpoint breakage

    In short: the platform is designed to detect intent, not just volume. Attempting to build a scraper to monitor Star Trek Academy posts wasn’t just blocked—it surfaced how mature Reddit’s anti-extraction infrastructure has become. I knew this intellectually, but putting it to the test made the reality clear: Reddit data is no longer a “free signal layer” for independent researchers.
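For context, even the “legitimate” path now assumes an approved script-type app with registered credentials. A hedged sketch of what a minimal OAuth read-only request looks like under those assumptions (the credentials, username, and subreddit are placeholders, not working values):

```python
# Hedged sketch: a read-only request through Reddit's OAuth flow.
# Assumes an approved "script"-type app; client_id/secret, username,
# and the subreddit name are placeholders.
import requests

USER_AGENT = "rewindos-signal-tracker/0.1 (by u/your_username)"

def get_token(client_id: str, client_secret: str,
              username: str, password: str) -> str:
    # Script apps exchange account credentials for a bearer token.
    resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=(client_id, client_secret),
        data={"grant_type": "password",
              "username": username, "password": password},
        headers={"User-Agent": USER_AGENT},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_new_posts(token: str, subreddit: str) -> dict:
    resp = requests.get(
        f"https://oauth.reddit.com/r/{subreddit}/new",
        headers={"Authorization": f"bearer {token}",
                 "User-Agent": USER_AGENT},
        params={"limit": 25},
        timeout=30,
    )
    # Respect the published rate-limit headers rather than hammering retries.
    print("remaining:", resp.headers.get("x-ratelimit-remaining"))
    resp.raise_for_status()
    return resp.json()
```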

    Infrastructure Takeaways: Railway and PostgreSQL

    On the infrastructure side, the experiment was still valuable. Railway proved viable for rapid prototyping: fast deploys, sane defaults, and painless Postgres provisioning. It’s not magic, but for early-stage dashboards and internal tooling, it removes a lot of friction.

    PostgreSQL held up exactly as expected—flexible enough to store raw ingestion data, structured metrics, and future-proofed schemas for signals that don’t fully exist yet. The limitation wasn’t the database or the API layer; it was upstream access to the data itself.
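For what it’s worth, the table shape that held up best was a hybrid: structured columns for the metrics I already track, plus a JSONB blob for the signals that don’t fully exist yet. A sketch, with illustrative names:

```python
# Hedged sketch of the "future-proofed" table shape: structured columns
# for known metrics, plus a JSONB payload for later re-analysis.
# Table and column names are illustrative.
import asyncio
import os

import asyncpg

SCHEMA = """
CREATE TABLE IF NOT EXISTS fandom_signals (
    id            BIGSERIAL PRIMARY KEY,
    source        TEXT NOT NULL,        -- e.g. 'reddit:r/startrek'
    external_id   TEXT NOT NULL,
    title         TEXT,
    score         INTEGER,
    num_comments  INTEGER,
    created_utc   TIMESTAMPTZ,
    raw           JSONB,                -- full payload for signals not yet defined
    UNIQUE (source, external_id)
);
"""

async def main():
    conn = await asyncpg.connect(os.environ["DATABASE_URL"])
    await conn.execute(SCHEMA)
    await conn.close()

if __name__ == "__main__":
    asyncio.run(main())
```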

    The Bigger Lesson

If RewindOS is going to measure cultural velocity, controversy, and fandom health, it can simply revert to its original dashboards built with Python and JSON. Though not real time, they still paint a clear picture of what’s going on within the fandom itself, which is all we really care about.

    What This First Pass Shows

    This first iteration of the Starfleet Academy trackers focuses on where conversation actually happens and what kind of attention the show receives, rather than trying to simulate a live social feed.

    A few clear patterns emerge immediately:

    1. r/television captures cultural moments, not fandom

    The r/television tracker surfaces:

    • Trailers and first-look teasers
• Series premiere threads (official discussion threads, again, did not appear on r/television until the second week)

    Engagement here is highly concentrated:

    • One or two posts (trailers or premiere discussions) account for a large share of total comments.
• Topical discussion related to the show is sparse and fragmented

This tells us r/television is best read as a general-audience and industry sentiment signal, not a place where sustained episode-by-episode engagement lives. However, the moderators do seem to be doing a good job keeping the sub from being bombarded by the larger cultural divisions within the Star Trek community around this show, as I discuss below.

    The absence of strong megathreads is itself a useful signal: it shows where not to look for fandom intensity.

    2. Adding Star Trek–specific subreddits changes the signal shape

    Once r/startrek and r/DaystromInstitute are included:

    • Overall engagement increases substantially
    • Episode-level discussion becomes more structured
    • Conversations shift from “Is this good?” to “How does this fit into canon / science / continuity?”

    This confirms the need for separate dashboards rather than one blended feed.


What This Tracker Looks At

    Star Trek Subreddits Tracker Focus

    The second tracker (r/startrek + r/DaystromInstitute) is intentionally different.

    It shows:

    • Sustained episode-to-episode discussion
    • Canon, timeline, and science-based analysis
    • Deep engagement that doesn’t show up in mainstream spaces

    r/DaystromInstitute, in particular, acts as a high-signal, low-noise environment:

    • Fewer posts
    • More thoughtful, analytical comments
    • Less reactionary churn

    This tracker is closer to a fandom depth and intellectual engagement index.

In general, it does appear that the cultural hate directed at SFA, at least on Reddit, has decreased, while comments and engagement are ticking up.

The show, while currently sitting at a dismal 4.3 IMDb rating, gets more engagement than other one-off streaming shows. There is also some discussion among fans who watched the latest Sisko-themed episode of SFA but say they will not watch any of the others (unless, I assume, later episodes offer further ties to Star Trek canon in their eyes).

It remains to be seen whether there will be any changes to the Star Trek: Starfleet Academy schedule, but despite the supposedly low rating, these engagements may be what Paramount is looking for after all.

    If you want to see or run the Python script used for this analysis, the full repository is here:

    👉 GitHub:

    https://github.com/jjf3/rewindOS_SFA2_Television_Tracker

    https://github.com/jjf3/rewindOS_sfa_StarTrekSub_Tracker

  • Tracking Love and Hate in Modern Fandoms: Part One — Heated Rivalry

    Why I Built This

    I originally thought tracking a TV show’s fandom growth on Reddit would be straightforward.

    Count subscribers.
    Track users online.
    Check back later and see if the numbers went up or down.

    That assumption turned out to be wrong.

    As I started looking more closely—first casually, then programmatically—it became clear that Reddit no longer exposes fandom size and activity in a way that’s consistent, comparable, or easy to track over time. Subscriber counts, “users online,” and newer UI-only labels often tell different stories depending on where and how you look.

    So instead of trying to force unreliable metrics to behave, I built a small tracker to answer a simpler question:

    If subscriber numbers are increasingly opaque, how does a fandom actually behave when a show is airing?


    The Problem With Traditional Metrics

    On the surface, Reddit still appears to show everything you’d want:

    • member counts,
    • active users,
    • large audience numbers framed as community participation.

    But those numbers are no longer consistent across views. The same subreddit can appear to have wildly different “sizes” depending on whether you’re looking at search results, a community sidebar, or aggregated UI elements that blend multiple related spaces together.

    For hobbyist analysts—or anyone trying to track how a TV show grows as it becomes more popular—this creates a real problem:
    you can’t rely on subscriber counts alone to measure engagement anymore.

    Rather than treating this as a bug to work around, I treated it as a signal that the metric itself had become less useful.


    A Different Approach: Comments as Engagement

    If membership numbers are increasingly abstracted, discussion isn’t.

    People still comment.
    They still argue.
    They still react in real time to episodes, trailers, and announcements.

    So for Heated Rivalry, I built a tracker that focuses on comment activity, not just who clicked “join.”

    The idea was simple:

    • Identify episode discussion threads.
    • Track how their comment counts change over time.
    • Separate episode discussions from trailers and general posts.
    • Treat comments as a proxy for sustained engagement, not passive interest.

    This doesn’t measure “how many people know about the show.”
    It measures how many people are actively participating in it.
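Under the hood, the tagging step is just a title heuristic. Here is a sketch of the idea, assuming titles follow the common 1x01 / “Episode Discussion” / trailer patterns; the exact rules live in the repository linked at the end.

```python
# Hedged sketch of the post-tagging heuristic. The actual script's rules
# may differ; this just shows the idea of splitting episode discussions
# from trailers and general chatter by title.
import re

EPISODE_RE = re.compile(r"\b(\d+)x(\d+)\b", re.IGNORECASE)   # e.g. "1x01"

def classify(title: str) -> str:
    lowered = title.lower()
    if EPISODE_RE.search(title) or "episode discussion" in lowered:
        return "episode"
    if "trailer" in lowered or "teaser" in lowered:
        return "trailer"
    return "other"

assert classify("Heated Rivalry - 1x01 'Pilot' - Episode Discussion") == "episode"
assert classify("Heated Rivalry | Official Trailer") == "trailer"
```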


    What the Tracker Looks At

    For this first case study, the tracker focuses on posts about Heated Rivalry within broader TV discussion spaces:

    • r/television episode discussion threads (e.g., 1x01, 1x02, etc.)
    • The official trailer post and its discussion
    • A small set of other high-engagement posts mentioning the show

    For each post, the tracker records:

    • comment count,
    • post score (net upvotes),
    • timestamp,
    • and whether it’s episode-related or not.

    By running the tracker repeatedly over time, it builds a longitudinal record of how discussion grows—or stalls—after episodes air.


    Why Comment Growth Matters

    Comments behave differently than votes or subscribers.

    • Votes are fast and shallow.
    • Subscribers are passive and increasingly obscured.
    • Comments require time, attention, and emotional investment.

    Episode discussion threads, in particular, are useful because they:

    • accumulate comments over days, not minutes,
    • reflect both positive and negative reactions,
    • and reveal whether conversation sustains beyond initial release.

    In practice, this makes comment growth a better indicator of fandom intensity than headline audience numbers.


    What This First Pass Shows

    The Heated Rivalry tracker is a Python script that queries r/television via Reddit’s public JSON search, tags posts as episode discussions (e.g., 1x01/1x02), an official trailer thread, or other mentions, and then captures each post’s num_comments, score, and permalink into CSV outputs.

    On each run it appends those comment counts into a longitudinal comment_history.csv, so the same threads can be measured repeatedly as they accumulate discussion over time.

    In our first pass it successfully separated episode discussion threads from non-episode chatter, identified the trailer thread, and produced a ranked list of a few other high-comment posts for comparison. The resulting plots/dashboard are designed to show whether engagement is concentrated in episode discussions (sustained growth) versus one-off posts (short spikes), using comments as the “true” activity signal rather than inconsistent UI membership numbers.
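For readers who would rather skim than clone the repo, here is a condensed sketch of that loop. Endpoint behavior, filenames, and field choices are assumptions based on the description above; the repository is the source of truth.

```python
# Condensed sketch of the tracker loop described above. Assumes Reddit's
# public search JSON is still reachable; filenames and fields mirror the
# description but are not copied from the repository.
import csv
import datetime as dt

import requests

USER_AGENT = "heated-rivalry-tracker/0.1 (personal research)"
SEARCH_URL = "https://www.reddit.com/r/television/search.json"

def fetch_posts(query: str = "Heated Rivalry") -> list[dict]:
    resp = requests.get(
        SEARCH_URL,
        params={"q": query, "restrict_sr": 1, "sort": "new", "limit": 100},
        headers={"User-Agent": USER_AGENT},
        timeout=30,
    )
    resp.raise_for_status()
    return [child["data"] for child in resp.json()["data"]["children"]]

def append_history(posts: list[dict], path: str = "comment_history.csv") -> None:
    # One row per post per run, so repeated runs build the longitudinal record.
    now = dt.datetime.now(dt.timezone.utc).isoformat()
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for p in posts:
            writer.writerow([now, p["id"], p["title"],
                             p["num_comments"], p["score"], p["permalink"]])

if __name__ == "__main__":
    append_history(fetch_posts())
```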


    What Comes Next

    This post is Part One of a broader series on measuring fandom response when platforms redefine their metrics.

    Next:

    • I’ll expand this approach to other shows,
    • compare episode-to-episode engagement patterns,
    • and apply the same methodology to much larger, more polarized fandoms.

    The goal isn’t to rank fandoms.


    It’s to understand how they actually behave when love, disappointment, and debate all show up in the same comment thread.

    If you want to see or run the Python script used for this analysis, the full repository is here:

    👉 GitHub: https://github.com/jjf3/rewindos_heated_rivalry_tracker