Jack Bodine

Expanding The Short Form Video App

March 2026

This is the third and final part of my series on building a self-hosted short-form media app. If you haven't already, the first two parts can be read here and here. This part focuses on adding some useful extensions to the project we've built so far, and I'll also elaborate a bit further on my motivation for the project.

Adding Image Support

The Instagram scraping script we wrote in Part I indiscriminately scrapes both video posts and image posts. Until now, the app has completely ignored the downloaded JPGs and focused solely on processing and serving the videos. There are a couple of considerations when adding support for ingesting and serving image posts. First, a single Instagram post can consist of multiple images, in which case we want them all grouped and served together. Second, not all JPGs in the raw media folder are posts; in fact, most of them are thumbnails attached to some video post, so we need to deliberately select which JPGs get processed.

The ideal way to implement image posts on the frontend would be to create a paginated horizontal scroll view for each image in a post, and to serve either the video player or the scroll view depending on the post type. However, that would require quite a bit of extra frontend work, which is unnecessary for the scope of this project. Instead, we can take a clever shortcut: all we have to do is tweak our HLS conversion script to also convert the image files to HLS. This way, no frontend change is needed at all. By tricking the native video player into treating static image carousels as low-framerate video streams, we completely bypass the headache of writing and maintaining a separate image pagination UI.

The following code, which you can simply append to the preprocessing script from Part I, iterates over every image in the raw directory. First, it checks whether a processed directory already exists; if it does, the post was a video that has already been processed. Second, it checks how many images share the same base name. If there is just one, it sets the ffmpeg arguments to convert it to a single-segment HLS file; if there are multiple, each becomes one segment of the resulting file. Keys are handled exactly as they were in Part I.

# extglob is needed for the *.@(jpg|webp) pattern; nullglob makes the
# carousel globs below expand to empty arrays instead of literal strings
shopt -s extglob nullglob

for f in "$RAW_DIR"/*.@(jpg|webp); do
    [ -f "$f" ] || continue

    filename=$(basename -- "$f")
    no_ext="${filename%.*}"

    # Strip any trailing _1, _2, etc. to get the base folder name
    foldername=$(echo "$no_ext" | sed -E 's/_[0-9]+$//')

    OUTPUT_FOLDER="$PROCESSED_DIR/$foldername"

    # If the folder already exists, either Pass 1 grabbed the mp4,
    # or an earlier pass over '_1.jpg' already processed this carousel.
    if [ -d "$OUTPUT_FOLDER" ]; then
        continue
    fi

    echo "Processing IMAGE/GROUP: $foldername..."
    mkdir -p "$OUTPUT_FOLDER"

    # Same as video processing
    KEY_FOLDER="$KEYS_DIR/$foldername"
    mkdir -p "$KEY_FOLDER"
    openssl rand 16 > "$KEY_FOLDER/video.key"
    IV=$(openssl rand -hex 16)
    echo "$BASE_KEY_URL/$foldername/key" > "$KEY_FOLDER/key_info"
    echo "$KEY_FOLDER/video.key" >> "$KEY_FOLDER/key_info"
    echo "$IV" >> "$KEY_FOLDER/key_info"

    declare -a FFMPEG_ARGS

    # Determine if it's a carousel or a single image
    group_jpgs=("$RAW_DIR/${foldername}_"*.jpg)
    group_webps=("$RAW_DIR/${foldername}_"*.webp)

    if [ ${#group_jpgs[@]} -gt 0 ]; then
        # Grouped JPG: each image becomes a 2-second frame
        FFMPEG_ARGS=("-framerate" "1/2" "-i" "$RAW_DIR/${foldername}_%d.jpg" "-c:v" "libx264" "-vf" "format=yuv420p,scale=trunc(iw/2)*2:trunc(ih/2)*2")
    elif [ ${#group_webps[@]} -gt 0 ]; then
        # Grouped WEBP
        FFMPEG_ARGS=("-framerate" "1/2" "-i" "$RAW_DIR/${foldername}_%d.webp" "-c:v" "libx264" "-vf" "format=yuv420p,scale=trunc(iw/2)*2:trunc(ih/2)*2")
    else
        # Single image (JPG or WEBP): loop the still for 3 seconds
        FFMPEG_ARGS=("-loop" "1" "-i" "$f" "-t" "3" "-c:v" "libx264" "-vf" "format=yuv420p,scale=trunc(iw/2)*2:trunc(ih/2)*2")
    fi

    if ffmpeg -y "${FFMPEG_ARGS[@]}" -r 30 -g 60 -keyint_min 60 -sc_threshold 0 \
        -hls_time 2 -hls_list_size 0 \
        -hls_key_info_file "$KEY_FOLDER/key_info" \
        -hls_segment_filename "$OUTPUT_FOLDER/chunk_%03d.ts" \
        "$OUTPUT_FOLDER/index.m3u8" ; then

        echo "Finished $foldername"
    else
        echo "FFmpeg failed on $foldername."
    fi
    rm -f "$KEY_FOLDER/key_info"
done
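As a side note, the base-name grouping the sed call performs can be mirrored in a few lines of Python if you want to sanity-check which files will end up in one carousel before running ffmpeg. This is purely illustrative; the actual pipeline stays in bash, and the filenames below are made up:

```python
import re
from collections import defaultdict

def group_posts(filenames):
    """Group raw image files by post: stripping a trailing _<n> (the
    carousel index) recovers the shared base name, mirroring
    sed -E 's/_[0-9]+$//' from the preprocessing script."""
    groups = defaultdict(list)
    for name in filenames:
        stem = name.rsplit(".", 1)[0]        # drop the extension
        base = re.sub(r"_\d+$", "", stem)    # drop the carousel index
        groups[base].append(name)
    return dict(groups)

# Hypothetical shortcodes: a two-image carousel and a single-image post
print(group_posts(["abc123_1.jpg", "abc123_2.jpg", "xyz789.jpg"]))
# {'abc123': ['abc123_1.jpg', 'abc123_2.jpg'], 'xyz789': ['xyz789.jpg']}
```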

That’s it! Without touching any frontend code, we can now run the sync endpoint and the app will serve both videos and images. However, it would be nice to pass along how many images belong to each post and show that to the user. This is easily done with a small update to the sync function, the model, and the UI.

# In sync_videos()
            meta_is_video = os.path.exists(f".../Reels/Raw/{item}.mp4")

            meta_post_image_count = 1
            if not meta_is_video:
                jpgs = glob.glob(f".../Reels/Raw/{item}_*.jpg")
                webps = glob.glob(f".../Reels/Raw/{item}_*.webp")
                carousel_count = len(jpgs) + len(webps)

                if carousel_count > 1:
                    meta_post_image_count = carousel_count

            ...

            new_video = Video(
                ...
                is_video=meta_is_video,
                post_image_count=meta_post_image_count
            )

Then, after updating the model to support the post_image_count and is_video columns, we add the following lines to ReelsOverlay.swift on the frontend.

metadataRow(label: "TYPE", value: video.isVideo ?? true ? "VIDEO" : "IMAGE")
if video.isVideo == false {
    metadataRow(label: "IMAGE COUNT", value: "\(video.postImageCount ?? 1)")
}

Progress Bar

The next fun addition is a progress bar that shows how far into each video you are. We can also expand this to show how many total images are in each photo post.

To pull this off in SwiftUI, we need to track the current time of the AVPlayer and use it to calculate a percentage for our progress bar. We also need to work out which page of a photo post is currently being viewed. Because we set each image segment to exactly two seconds in the previous section, the current page is just the current time divided by two, plus one.
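Since that arithmetic is the heart of the page indicator, here it is as a tiny standalone sketch (plain Python purely to illustrate the math, not app code):

```python
def current_page(current_time: float, image_count: int) -> int:
    # Each image is a 2-second HLS segment, so the page is
    # floor(t / 2) + 1, clamped so playback at the very end
    # can't report a page beyond the last image.
    return min(int(current_time / 2.0) + 1, image_count)

print(current_page(0.5, 3))  # 1 — still on the first image
print(current_page(3.9, 3))  # 2 — between 2s and 4s
print(current_page(7.9, 3))  # 3 — clamped at the image count
```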

Let’s start by adding the necessary state tracking and computed properties to our ReelsOverlay view:

@State private var currentTime: Double = 0
@State private var activeObserver: (player: AVPlayer, token: Any)? = nil

private var currentPage: Int {
    let count = video.postImageCount ?? 1
    return min(Int(currentTime / 2.0) + 1, count)
}

private var videoProgress: Double {
    guard let item = player.currentItem else { return 0 }
    let duration = item.duration.seconds
    guard duration > 0, duration.isFinite else { return 0 }
    return min(max(currentTime / duration, 0), 1)
}

With the logic in place, we can build the UI. We’ll use a GeometryReader placed at the top of our main ZStack to draw an edge-to-edge line across the screen. We animate the width of the inner rectangle so that it smoothly ticks forward as the videoProgress increases.

GeometryReader { geo in
    Rectangle()
        .fill(Color.white.opacity(0.3))
        .overlay(alignment: .leading) {
            Rectangle()
                .fill(Color.white)
                .frame(width: geo.size.width * CGFloat(videoProgress))
                .animation(.linear(duration: 0.1), value: videoProgress)
        }
}
.frame(height: 2)
.zIndex(100) // Ensure it stays on top of all the other overlay UI

The last and most important step is actually feeding the time data from the AVPlayer into our currentTime variable. Thankfully, AVPlayer provides a method called addPeriodicTimeObserver for exactly this purpose.

We can attach this observer when the view appears. However, because our app uses an infinite scroll view that recycles video players, we must manually remove the observer when the video disappears from the screen; otherwise it leaks memory and can eventually crash the app.

We handle this safely by saving the observer token to our activeObserver state tuple.

.onAppear {
    // Update 10 times a second
    let interval = CMTime(seconds: 0.1, preferredTimescale: 600)
    let token = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { time in
        currentTime = time.seconds
    }
    activeObserver = (player, token)
}
.onDisappear {
    if let observerData = activeObserver {
        observerData.player.removeTimeObserver(observerData.token)
        activeObserver = nil
    }
}

To finish this part off, I added a progress badge at the top of all image posts, making it clear to the user whether they are watching a video or should wait for the next image.

HStack(spacing: 8) {
    if video.isVideo == false {
        Text("Image")
            .font(.caption.weight(.bold))
            .foregroundColor(.white)
            .padding(.horizontal, 10)
            .padding(.vertical, 4)
            .background(Color.cyan.opacity(0.8))
            .clipShape(Capsule())

        if let count = video.postImageCount, count > 1 {
            Text("\(currentPage)/\(count)")
                .font(.caption.weight(.bold))
                .foregroundColor(.white)
                .padding(.horizontal, 10)
                .padding(.vertical, 4)
                .background(Color.black.opacity(0.6))
                .clipShape(Capsule())
        }
    }
}
.frame(maxWidth: .infinity, alignment: .center)
.padding(.top, 20)

Hashtags

Hashtags are a staple feature of virtually all social media apps. In our case, the video description already includes each uploader's designated tags as plain text, but we can expand on this to support explicitly adding and removing tags in our database, and eventually filtering our feed by them.

To make this work on the backend, we first need to set up a many-to-many relationship using SQLAlchemy, since a video can have multiple tags and a single tag will likely belong to multiple videos. This means adding a new ‘tags’ table and a tag-video association table to the database. We also need to define our schemas and create a couple of routes: one to fetch all available tags, and a single “toggle” endpoint that handles both adding and removing tags from a specific video.


# Models
video_tag_association = Table(  
    'video_tag', Base.metadata,  
    Column('video_id', Integer, ForeignKey('videos.id')),  
    Column('tag_id', Integer, ForeignKey('tags.id'))  
)

class Tag(Base):  
    __tablename__ = "tags"  
    id = Column(Integer, primary_key=True, index=True)  
    name = Column(String, unique=True, index=True)  
    videos = relationship("Video", secondary=video_tag_association, back_populates="tags")  
  
    @property  
    def video_count(self):  
        return len(self.videos)
        
class Video(Base):  
    ...
    tags = relationship("Tag", secondary=video_tag_association, back_populates="videos")  

# Schemas
class TagOut(BaseModel):  
    id: int  
    name: str  
    video_count: int  
    model_config = {"from_attributes": True}  
  
class TagSimpleOut(BaseModel):  
    id: int  
    name: str  
    model_config = {"from_attributes": True}
    
# Routes
@router.post("/api/videos/{folder_name}/tags/{tag_name}/toggle", dependencies=[Depends(verify_api_key)])
def toggle_video_tag(folder_name: str, tag_name: str, db: Session = Depends(get_db)):
    video = db.query(Video).filter(Video.folder_name == folder_name).first()
    if not video:
        raise HTTPException(status_code=404, detail="Video not found")

    clean_tag = tag_name.strip().lower()
    tag = db.query(Tag).filter(Tag.name == clean_tag).first()

    if not tag:
        tag = Tag(name=clean_tag)
        db.add(tag)

    if tag in video.tags:
        video.tags.remove(tag)
        # Cleanup if the tag is no longer used by any video
        video_count = db.query(video_tag_association).filter(video_tag_association.c.tag_id == tag.id).count()
        if video_count == 0:
            db.delete(tag)
    else:
        video.tags.append(tag)

    db.commit()
    db.refresh(video)

    # Convert the ORM objects to schemas so FastAPI can serialize the response
    return {"message": "Tag toggled", "tags": [TagSimpleOut.model_validate(t) for t in video.tags]}

@router.get("/api/tags", response_model=List[TagOut], dependencies=[Depends(verify_api_key)])
def get_all_tags(db: Session = Depends(get_db)):
    results = db.query(
        Tag.id,
        Tag.name,
        func.count(video_tag_association.c.video_id).label('video_count')
    ).join(video_tag_association, isouter=True) \
        .group_by(Tag.id).all()

    return [{"id": r.id, "name": r.name, "video_count": r.video_count} for r in results]
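Stripped of the ORM details, the toggle endpoint's semantics boil down to: normalize the name, create on first use, toggle membership, and garbage-collect tags nothing references anymore. Here is that logic as a deliberately simplified in-memory sketch, with plain sets and dicts standing in for the database tables (this is an illustration, not the route code above):

```python
def toggle_tag(video_tags: set[str], all_tags: dict[str, int], tag_name: str) -> None:
    """video_tags: tags on one video; all_tags: tag name -> usage count."""
    clean = tag_name.strip().lower()         # same normalization as the route
    if clean in video_tags:
        video_tags.discard(clean)
        all_tags[clean] -= 1
        if all_tags[clean] == 0:             # no video uses it anymore,
            del all_tags[clean]              # analogous to db.delete(tag)
    else:
        video_tags.add(clean)
        all_tags[clean] = all_tags.get(clean, 0) + 1

tags_on_video, tag_counts = set(), {}
toggle_tag(tags_on_video, tag_counts, " Art ")   # add (created on first use)
toggle_tag(tags_on_video, tag_counts, "art")     # remove + cleanup
print(tags_on_video, tag_counts)                 # set() {}
```

Toggling the same name twice is a no-op overall, which is exactly why a single endpoint can serve both "add" and "remove" in the UI.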

Instead of manually tagging thousands of posts, we can update our sync_videos function to automatically extract any existing hashtags from the caption using a simple regex search, and populate our new database tables during intake. We add the following to the end of the sync script:

def sync_videos():
    ...

    if active_vid.description:
        # Lowercase inside the comprehension
        extracted = set(ht.lower() for ht in re.findall(r"#(\w+)", active_vid.description))
        for clean_t in extracted:
            if not any(t.name == clean_t for t in active_vid.tags):
                db_tag = db.query(Tag).filter(Tag.name == clean_t).first()
                if not db_tag:
                    db_tag = Tag(name=clean_t)
                    db.add(db_tag)
                    db.flush()
                active_vid.tags.append(db_tag)
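To make the regex's behavior concrete, here is the same extraction logic as a standalone snippet; note how duplicates that differ only in case collapse to a single tag (the caption is made up):

```python
import re

def extract_hashtags(description: str) -> set[str]:
    # Identical pattern to sync_videos(): '#' followed by word characters,
    # lowercased and deduplicated via the set comprehension
    return {ht.lower() for ht in re.findall(r"#(\w+)", description)}

print(sorted(extract_hashtags("New print finished! #3dprinting #Art #art")))
# ['3dprinting', 'art']
```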

Moving over to the frontend, we need to mirror these changes. First, we define a Swift model to match the TagOut schema we created on our server.

In Tag.swift…

struct Tag: Codable, Identifiable {
    let id: Int
    let name: String
    let videoCount: Int?
    
    enum CodingKeys: String, CodingKey {
        case id
        case name
        case videoCount = "video_count"
    }
}

Next, we update our NetworkManager to hit the new API endpoints we just wrote. These are not much different from the toggleFavorite and setRating functions we wrote in Part II.

    var allTags: [Tag] = []

    ...

    func toggleTag(for folderName: String, tagName: String) async {
        let url = baseURL.appendingPathComponent("api/videos/\(folderName)/tags/\(tagName)/toggle")
        let request = authenticatedRequest(for: url, method: "POST")
        
        do {
            let (data, response) = try await URLSession.shared.data(for: request)
            if let http = response as? HTTPURLResponse, http.statusCode == 200 {
                struct TagResponse: Codable { let tags: [Tag] }
                let result = try JSONDecoder().decode(TagResponse.self, from: data)
                mutateVideo(folderName: folderName) { $0.tags = result.tags }
            }
        } catch {
            print("Failed to toggle tag: \(error)")
        }
    }
    
    func fetchAllTags() async {
        let url = baseURL.appendingPathComponent("api/tags")
        do {
            let request = authenticatedRequest(for: url)
            let (data, _) = try await URLSession.shared.data(for: request)
            let decodedTags = try JSONDecoder().decode([Tag].self, from: data)
            self.allTags = decodedTags
        } catch {
            print("Failed to fetch all tags: \(error)")
        }
    }

Finally, we can tie it all together in the UI. In ReelsOverlay.swift, we need to add a horizontal scroll view to display the currently assigned tags just above the video description, and a sheet containing a LazyVGrid to allow the user to search, create, and toggle tags on the fly.

    // Tag Management State
    @State private var showTagSheet = false
    @State private var showDeleteAlert = false
    @State private var tagToDelete: Tag? = nil
    @State private var newTagText = ""
    @State private var stableSortedTags: [Tag] = []
    
    ...
    
    // Scrollable Tags Row
    ScrollView(.horizontal, showsIndicators: false) {
        HStack(spacing: 8) {
            if let tags = video.tags, !tags.isEmpty {
                ForEach(tags) { tag in
                    Text("#\(tag.name)")
                        .font(.system(size: 12, weight: .bold))
                        .foregroundColor(.white)
                        .padding(.horizontal, 12)
                        .padding(.vertical, 6)
                        .background(Color.black.opacity(0.5))
                        .clipShape(Capsule())
                        .onTapGesture {
                            tagToDelete = tag
                            showDeleteAlert = true
                        }
                }
            }

            Button(action: { showTagSheet = true }) {
                Image(systemName: "plus")
                    .font(.system(size: 12, weight: .black))
                    .foregroundColor(.white)
                    .padding(8)
                    .background(Color.black.opacity(0.5))
                    .clipShape(Circle())
            }
        }
    }
    .frame(maxWidth: 280)

    ...

    .sheet(isPresented: $showTagSheet) {
        tagSheetContent
    }

    ...

    private var displayTags: [Tag] {
        let trimmed = newTagText.trimmingCharacters(in: .whitespaces).lowercased()
        if trimmed.isEmpty { return stableSortedTags }

        var tags = stableSortedTags.filter { $0.name.lowercased().contains(trimmed) }

        // If the exact search term doesn't exist, append a placeholder to create it
        if !stableSortedTags.contains(where: { $0.name.lowercased() == trimmed }) {
            tags.append(Tag(id: -1, name: trimmed, videoCount: nil))
        }

        return tags
    }

And here is the view layout for the bottom sheet itself, providing a clean search bar and an adaptive grid of selectable tag pills.

private var tagSheetContent: some View {
        NavigationStack {
            VStack(spacing: 0) {
                // Search
                HStack {
                    Image(systemName: "magnifyingglass")
                        .foregroundColor(.secondary)
                    TextField("Search or create tag...", text: $newTagText)
                        .textFieldStyle(.plain)
                        .textInputAutocapitalization(.never)
                        .autocorrectionDisabled()
                    if !newTagText.isEmpty {
                        Button(action: { newTagText = "" }) {
                            Image(systemName: "xmark.circle.fill")
                                .foregroundColor(.secondary)
                        }
                    }
                }
                .padding(12)
                .background(Color.secondary.opacity(0.1))
                .cornerRadius(10)
                .padding()
                
                Divider()
                
                //List
                ScrollView {
                    VStack(alignment: .leading, spacing: 16) {
                        Text("All Tags")
                            .font(.system(size: 14, weight: .bold))
                            .foregroundColor(.secondary)
                            .padding(.horizontal)
                            .padding(.top, 16)
                        
                        LazyVGrid(columns: [GridItem(.adaptive(minimum: 100))], alignment: .leading, spacing: 10) {
                            ForEach(displayTags) { dbTag in
                                let isSelected = video.tags?.contains(where: { $0.name == dbTag.name }) ?? false
                                
                                Button(action: {
                                    Task {
                                        await networkManager.toggleTag(for: video.folderName, tagName: dbTag.name)
                                        if dbTag.id == -1 {
                                            newTagText = ""
                                            await networkManager.fetchAllTags()
                                        }
                                    }
                                }) {
                                    HStack(spacing: 4) {
                                        if dbTag.id == -1 { Image(systemName: "plus") }
                                        Text(dbTag.id == -1 ? "Create \"#\(dbTag.name)\"" : "#\(dbTag.name)")
                                    }
                                    .font(.system(size: 15, weight: .semibold))
                                    .padding(.horizontal, 14)
                                    .padding(.vertical, 10)
                                    .background(isSelected ? Color.blue : Color.secondary.opacity(0.1))
                                    .foregroundColor(isSelected ? .white : (dbTag.id == -1 ? .blue : .primary))
                                    .clipShape(Capsule())
                                    .overlay(dbTag.id == -1 ? Capsule().stroke(Color.blue, lineWidth: 1) : nil)
                                }
                            }
                        }
                        .padding(.horizontal)
                    }
                }
            }
            .navigationTitle("Tags")
            .navigationBarTitleDisplayMode(.inline)
            .toolbar {
                ToolbarItem(placement: .navigationBarTrailing) {
                    Button("Done") { showTagSheet = false }
                }
            }
        }
        .presentationDetents([.medium, .large])
        .task {
            await networkManager.fetchAllTags()
            stableSortedTags = networkManager.allTags.sorted { $0.name < $1.name }
        }
    }

Filters

There isn’t much purpose in adding all this user data to posts if it isn’t utilized, and the clearest way to do so is by using it to filter posts. Maybe you only want to see posts you’ve previously favorited, or maybe you specifically want to browse videos tagged with #art while excluding anything tagged with #memes.

To support this, we need to create a filtering system. The cleanest approach on the backend is to define a Pydantic schema specifically for our filter criteria, and then create a new POST endpoint that dynamically builds a SQLAlchemy query based on whatever parameters the client sends.

class FeedFilter(BaseModel):
    require_tags: Optional[List[str]] = []
    exclude_tags: Optional[List[str]] = []
    only_favorites: Optional[bool] = False
    min_rating: Optional[int] = 0
    limit: Optional[int] = 10
    offset: Optional[int] = 0
    randomize: Optional[bool] = False
    seen_ids: Optional[List[int]] = []

@router.post("/api/feed/filter", response_model=List[VideoOut], dependencies=[Depends(verify_api_key)])
def get_filtered_feed(filters: FeedFilter, db: Session = Depends(get_db)):
    """Returns a list of videos matching specific criteria."""
    query = db.query(Video)

    if filters.only_favorites:
        query = query.filter(Video.is_favorited == True)

    if filters.min_rating > 0:
        query = query.filter(Video.rating >= filters.min_rating)

    if filters.require_tags:
        for tag_name in filters.require_tags:
            clean_tag = tag_name.strip().lower()
            query = query.filter(Video.tags.any(Tag.name == clean_tag))

    if filters.exclude_tags:
        for tag_name in filters.exclude_tags:
            clean_tag = tag_name.strip().lower()
            query = query.filter(~Video.tags.any(Tag.name == clean_tag))

    if filters.seen_ids:
        query = query.filter(~Video.id.in_(filters.seen_ids))

    if filters.randomize:
        query = query.order_by(func.random())
    else:
        query = query.order_by(Video.id.desc())

    return query.offset(filters.offset).limit(filters.limit).all()
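To make the endpoint's filtering rules concrete, here is a toy in-memory mirror of its WHERE clauses, with plain dicts in place of ORM rows (ordering and pagination are left out, and the sample data is made up):

```python
def apply_filter(videos: list[dict], f: dict) -> list[dict]:
    """Keep a video only if it passes every active clause,
    matching the query built in get_filtered_feed()."""
    return [
        v for v in videos
        if (not f.get("only_favorites") or v["is_favorited"])
        and v["rating"] >= f.get("min_rating", 0)
        and all(t in v["tags"] for t in f.get("require_tags", []))
        and not any(t in v["tags"] for t in f.get("exclude_tags", []))
        and v["id"] not in f.get("seen_ids", [])
    ]

videos = [
    {"id": 1, "tags": ["art"], "is_favorited": True, "rating": 4},
    {"id": 2, "tags": ["art", "memes"], "is_favorited": False, "rating": 5},
    {"id": 3, "tags": ["memes"], "is_favorited": True, "rating": 2},
]
print(apply_filter(videos, {"require_tags": ["art"], "exclude_tags": ["memes"]}))
# only video 1 survives: it has #art and no #memes
```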

As usual, we need to mirror this structure on the frontend by creating a Swift Codable struct that matches the Pydantic model exactly.

struct FeedFilter: Codable {
    var requireTags: [String]?
    var excludeTags: [String]?
    var onlyFavorites: Bool?
    var minRating: Int?
    var limit: Int?
    var offset: Int?
    var randomize: Bool?
    var seenIds: [Int]?
    
    enum CodingKeys: String, CodingKey {
        case requireTags = "require_tags"
        case excludeTags = "exclude_tags"
        case onlyFavorites = "only_favorites"
        case minRating = "min_rating"
        case limit
        case offset
        case randomize
        case seenIds = "seen_ids"
    }
}

Next, we need our NetworkManager to track whether a filter is currently active, save it across app launches using UserDefaults, and route our feed requests to the new endpoint whenever a filter is applied.

var activeFilter: FeedFilter? {
        get {
            if let data = UserDefaults.standard.data(forKey: "savedFeedFilter"),
               let filter = try? JSONDecoder().decode(FeedFilter.self, from: data) {
                return filter
            }
            return nil
        }
        set {
            if let newValue = newValue,
               let data = try? JSONEncoder().encode(newValue) {
                UserDefaults.standard.set(data, forKey: "savedFeedFilter")
            } else {
                UserDefaults.standard.removeObject(forKey: "savedFeedFilter")
            }
        }
    }

    private func buildFeedRequest() throws -> URLRequest {
        // If there is an active filter, use the new POST endpoint
        if var currentFilter = activeFilter {
            currentFilter.randomize = true

            let url = baseURL.appendingPathComponent("api/feed/filter")
            var request = authenticatedRequest(for: url, method: "POST")
            request.httpBody = try JSONEncoder().encode(currentFilter)
            return request
        } else {
            // Fall back to the basic random endpoint if no filters are applied
            let url = baseURL.appendingPathComponent("api/feed/random")
            return authenticatedRequest(for: url)
        }
    }

    func fetchVideos(for tag: String, limit: Int = 20, seenIds: [Int] = []) async -> [Video] {
        var filter = FeedFilter(requireTags: [tag], limit: limit)
        filter.randomize = true
        
        if !seenIds.isEmpty { filter.seenIds = seenIds }
        
        let url = baseURL.appendingPathComponent("api/feed/filter")
        var request = authenticatedRequest(for: url, method: "POST")
        
        do {
            request.httpBody = try JSONEncoder().encode(filter)
            let (data, _) = try await URLSession.shared.data(for: request)
            return try JSONDecoder().decode([Video].self, from: data)
        } catch {
            print("Failed to fetch videos for tag \(tag): \(error)")
            return []
        }
    }

Lastly, we need to build the UI for the user to actually set these parameters. I built a FilterSheet that makes heavy use of a custom FilterAccordion view to keep each filter option organized. When the user taps “Apply”, it checks whether the filter is essentially empty (and clears it if so), or saves it to the network manager and triggers a feed refresh. I’ve opted to omit this code from the blog post, as the UI code is long, cluttered, and uncomplicated; if you wish, you can find it in the project repository on GitHub.

Etcetera

I’ve made quite a few more additions to the app, including a custom collections view that lets you explore all posts with a certain tag in a grid, more filters, and the ability to rename tags or trigger a sync from the client. Most importantly, I added a custom algorithm that changes how frequently posts are shown depending on their rating and hashtags. For example, I’ve decreased the frequency of ‘#fyp’, which tends to be engagement bait, and boosted more niche topics like ‘#linux’ and ‘#3dprinting’.

I’ll avoid explaining all of my additions in depth, since this post has already grown considerably long and the remaining features mostly reuse concepts already introduced. This goes to show that what we’ve built serves as a great basis for expansion; there is no shortage of possible additions.

Motivation

As I was working on this project, a landmark social media addiction case concluded with the jury siding against Meta. The jury found that both Instagram and YouTube are deliberately engineered to be addictive, and that they had caused enough damage to the lives of the individuals involved to warrant a $6 million award. It is absolutely wonderful that companies who have caused so much harm no longer act with impunity. It is virtually a truism now that large tech companies act solely in their own interest, with little regard for their users. This one-sided relationship makes their products inherently toxic to spend time on.

In my personal journals, there is no topic I have written about at more length than the atrocities of social media. I find it antithetical to the human experience for a multitude of reasons I can't fully explain here. It may sound paradoxical, then, that I would make my own pseudo social media app, but I don't believe any media format is inherently good or bad in itself. Decentralized social media, such as e-mail or the fediverse, tends to be significantly more compatible with life. Unfortunately, there isn't much of an alternative to corporations for SFVs yet, which prompted me to come up with my own solution.

In the app we’ve created, you have full control over the content you see, how you see it, and how much of it you see. These are not abilities corporations will ever give up willingly, as they confer enormous power over their users. When you own the database and control the algorithm, the feed turns from an infinite slot machine designed to extract engagement into something more deliberate, personal, and useful.

Why, then, is it necessary to have a SFV app in the first place? What exactly is the benefit of a format I believe is predisposed to harm? To me, the saved posts act as a bit of a vision board. There are some truly beautiful things people around the world have made and shared in SFV apps, and seeing such passion in others helps stir my own, giving me motivation and focus. We don't have to abandon the format entirely just because corporations have misused it. By taking ownership of the software, you can strip away the manipulation and rebuild it into a tool that fosters motivation rather than dependency.
