
a very hacky script to generate an xml item for your rss feed

as of august 2025, this script has FINALLY been updated to write the feed automatically instead of being a copy-paste generator! idk why it took me so long to do it. the original script is still here though. and no, i still don't know python.

so rss is this really cool tech we all love, but when you're writing your entire static website by hand you also have to update your feed by hand, and that gets just slightly annoying. i don't really know python, but it's a testament to the language's noob-friendliness that i quickly wrote a basic script to do this for me, saving probably no more than two minutes of effort whenever i update my feed.

readme

look, i'm sorry, but my guide presupposes basic command line stuff like knowing where you are and how to change directories. i believe in you. oh and obviously you need python. python 3, really, since python 2's input() tries to eval whatever you type and breaks on plain text. it's what i have on this machine anyway.

i'm putting the cart before the horse, but if you're on python 3.6+ the new_entry template can be tidied up thanks to f-strings. i didn't change the whole script because i think it's nice to keep it as compatible as possible (within my knowledge) just in case, since i find python versioning kinda confusing. but here's the code in question; it's purely a matter of aesthetics.

code
new_entry = f"""<!-- newentry -->

    <item>
      <title>{title}</title>
      <pubDate>{pub_date}</pubDate>
      <link>{WEBSITE}{path}</link>
      <description><![CDATA[ 
      {description}
       ]]></description>
    </item>
"""

if you'd rather have a graphical interface, i recommend karma chameleon's tool, which creates copy-pastable entries with js in your browser. russhdown is a similar tool that can generate and update the xml file itself; it's meant more as an rss-as-social-media thing where you use it for posts rather than to talk about updates, but you can use it in whatever way you like, so i'm sharing it.

how it works

the script has comments and it's really not complicated (though it is suboptimal and a little idiosyncratic, which adds to the charm). but here's a simple explanation so you can understand what's going on and how to modify things if needed.

there are five steps:

1) it gets the current date / time.
2) it prompts you for the title, path and description of your entry, and puts those into an item template.
3) it opens your feed, searches for a comment saying "next entry here!" and replaces it with the new entry (while adding a new "next entry here!" comment).
4) it removes the oldest entry to keep things tidy (this is optional).
5) it writes the new feed.xml file.

because it relies on search and replace, it expects a bit of a template. your xml feed should already be set up (here's a simple rss setup guide for that), named feed.xml and placed in the same folder as the script. both of these things can be very easily changed.

it also expects new entries to go at the top, where you'll write <!-- newentry -->. this is easy to change too. you can see what that looks like on my feed.
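for reference, a bare-bones feed.xml skeleton the script is happy with might look something like this (the titles and urls here are just placeholders, obviously):

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>your site</title>
    <link>https://example.com/</link>
    <description>updates from your site</description>

<!-- newentry -->

    <item>
      <title>an older entry</title>
      <pubDate>Mon, 01 Jan 2024 12:00:00 -0300</pubDate>
      <link>https://example.com/old-page.html</link>
      <description><![CDATA[ a previously added entry ]]></description>
    </item>

  </channel>
</rss>

the marker sits at the top of the item list, and since the new_entry template re-adds the comment above each new item, the marker survives every run.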

using the script
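there's not much to it: put the script in the same folder as feed.xml (the filename is up to you; rss.py here is just an example), run it, and answer the three prompts. a session looks something like this:

$ python rss.py
title: my new page
path: pages/new-page.html
description: i made a new page, check it out

== SUCCESS ^_^ ==

note that the link gets built as WEBSITE + path, so since WEBSITE already has a trailing slash, don't put a leading slash on the path.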

the script


#### IMPORTING MODULES ####

from time import localtime, strftime
import re


#### CONSTANTS ####

# edit these. keep quotes where they exist!

UTC_OFFSET = "-0300"
WEBSITE = "https://example.com/"       # trailing slash expected
FEED_NAME = "feed.xml"       # use the path relative to the script (untested if they're in different dirs btw)
DELETE_OLDEST = True


#### MAKING THE ENTRY ####

# get date time and format it properly
pub_date = strftime("%a, %d %b %Y %H:%M:%S " + UTC_OFFSET, localtime())

# get info like title etc and create the new entry
title = input("title: ")
path = input("path: ")
description = input("description: ")

new_entry = """<!-- newentry -->

    <item>
      <title>""" + title + """</title>
      <pubDate>""" + pub_date + """</pubDate>
      <link>""" + WEBSITE + path + """</link>
      <description><![CDATA[ 
      """ + description + """
       ]]></description>
    </item>
"""



#### SEARCH AND REPLACE ####

with open(FEED_NAME, 'r') as file:         # this gets it as read-only (so nothing gets borked)
    feed = file.read()
    feed = feed.replace('<!-- newentry -->', new_entry)

    ## POP THE OLDEST ENTRY ##

    if DELETE_OLDEST:

        # grab every <item> block (re.S lets . match across newlines)
        find_items = re.findall(r"<item>.+?</item>", feed, re.S)

        if find_items:        # don't blow up if the feed has no items yet
            oldest_entry = find_items[-1]
            feed = feed.replace(oldest_entry, "")



#### WRITE THE NEW FILE ####

with open(FEED_NAME, 'w') as file:         # now we open it in write mode
    file.write(feed)


#### BEING CUTES ####

print('\033[1;32m' + "\n== SUCCESS ^_^ ==\n" + '\033[0m')
    

the legacy script

this script simply prints an entry to your console, which you then copy and paste into your feed xml file. save the file and upload it to your website however you usually do.

you need to edit two things here, the UTC offset and your website url (trailing slash expected).

code

from time import localtime, strftime

# get date time and format it properly

# change the '-0300' part to your UTC offset 
pub_date = strftime("%a, %d %b %Y %H:%M:%S -0300", localtime())

# get info like title etc
title = input("title: ")
path = input("path: ")
description = input("description: ")

# final copy pastable item
print(
"""
<item>
      <title>""" + title + """</title>
      <pubDate>""" + pub_date + """</pubDate>
      <link>https://YOUR WEBSITE HERE/""" + path + """</link>
      <description><![CDATA[ 
      """ + description + """
       ]]></description>
  </item>
"""
)
        

you're welcome to modify and redistribute both scripts to your heart's content. they're public domain, who cares. and maybe let me know if you can automate the utc offset thing?
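for what it's worth, on python 3.6+ something like this should pull the offset straight from the system. it's a sketch i haven't folded into the script, so no promises:

from datetime import datetime

# ask the system for the local timezone, then format its offset
# the way rss expects it (e.g. "-0300")
UTC_OFFSET = datetime.now().astimezone().strftime("%z")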

