Macros in archived MDN pages

We would like to delete unused KumaScript macros, so it’s easier for us to improve the macros that we actually use and need. Many macros are used only (or mostly) in MDN pages that have been archived (for example, there are a bunch of B2G ones).

If we just delete these macros from KumaScript without updating the MDN pages to remove the calls to them, we will see errors in those pages (see the pages listed at https://developer.mozilla.org/en-US/docs/with-errors).

I can think of three approaches to this problem:

  1. Just delete the macros from KumaScript, and don’t worry about errors in the archived pages. This is the least-work approach: since the pages are archived anyway, it doesn’t matter too much that they would now contain errors, though the approach does have drawbacks.
  2. Update all the archived pages to remove the macro calls, then delete the macros. The problem is that this would involve manually updating thousands of pages.

  3. Render all the archived pages, running the macros, then replace each page with its rendered version, so the archived pages would no longer contain any macro calls. This seems like a good approach, but I don’t know how difficult it would be.


To help me think about the “best” scenario: is there any table or dashboard that would let us know exactly which macros are used only in archived content?

I think (3) is the clear best choice here. One thing we might consider: should we move archived content off MDN entirely, to a separate mdn-archive site? That would give us control over appearance, indexing, and more, which we don’t have now and have no realistic prospect of gaining anytime soon, even though fixing those issues would improve the overall SEO of MDN’s primary content.

My vote is for 3, but just as another page version. The “no-macro” version would be the latest version, and you could revert to a previous version to restore macros (for example, if we un-archive a page, which happened recently for an RSS reference page). This tactic can be done today, by any editor, with no additional Kuma features.

For example, the B2G_OS page in the Archive contains a few macros. In fact, most of its content comes from macros, as you can see from the raw version:

https://developer.mozilla.org/en-US/docs/Archive/B2G_OS?raw

You can request the fully rendered content, with all macros executed:

https://developer.mozilla.org/en-US/docs/Archive/B2G_OS?raw&macros

You can then copy the page source, and paste it into the editor (using the “Source” view, not the WYSIWYG view).

We could aim to do this across the whole Archive, or apply it selectively to get rid of particular macros.


I also think (3) makes the most sense. John’s suggestion sounds like a good, low-cost solution.

Is the suggestion that we do this manually? I count over 2,300 archived pages in en-US alone; that doesn’t sound very low-cost.

Or could we use the Kuma PUT API for this?

Doing it manually sounds like a painful job that we shouldn’t waste time on. If it’s to be done, it should be done using the PUT API or some other form of automation.

I’d like to build automation for changes like this, but realistically we haven’t found time to improve the platform this way in the last few years, and I don’t see this changing in the next 12 months unless there is a significant change in the team. It’s best to plan with the tools we have (PUT API, spreadsheet-driven projects).

Yes, certainly, I didn’t mean to imply that the dev team should build new automation capabilities. I meant to suggest the possibility of using a client-side automation system, such as AppleScript, to drive the UI automatically and make these changes.

Then here’s a suggestion. We write a script that, for each URL in a given set of MDN pages:

  • fetches the “rendered” content of the page from https://developer.mozilla.org/path/to/the/page?raw&macros
  • writes that content back to the page using the Kuma PUT API
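A minimal sketch of such a script in Python, for discussion only: the `?raw&macros` query string comes from this thread, but the PUT endpoint shape, the `Authorization` header, and the token scheme below are assumptions, so check the Kuma PUT API documentation (and test on staging) before running anything like this.

```python
# Hypothetical sketch: freeze the rendered content of a set of MDN pages.
# The ?raw&macros query string is from this thread; the PUT endpoint and
# auth header below are assumptions, not the documented Kuma API.
import urllib.request

BASE = "https://developer.mozilla.org"

def rendered_url(page_path: str) -> str:
    """URL that returns the page source with all macros executed."""
    return f"{BASE}{page_path}?raw&macros"

def fetch_rendered(page_path: str) -> str:
    """Download the macro-expanded source of a page."""
    with urllib.request.urlopen(rendered_url(page_path)) as resp:
        return resp.read().decode("utf-8")

def put_back(page_path: str, html: str, api_key: str) -> int:
    """Write the rendered source back over the page (assumed auth scheme)."""
    req = urllib.request.Request(
        f"{BASE}{page_path}",
        data=html.encode("utf-8"),
        method="PUT",
        headers={
            "Content-Type": "text/html",
            "Authorization": f"Token {api_key}",  # assumption, not verified
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def freeze_pages(page_paths, api_key):
    """For each page, replace its source with its rendered version."""
    for path in page_paths:
        put_back(path, fetch_rendered(path), api_key)
```

Pointed at staging with a handful of Archive paths (e.g. `/en-US/docs/Archive/B2G_OS`), this would let us validate the approach before touching anything in production.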

We could do this for archived pages, and also for pages that are not actively maintained, like the XPCOM interface reference.

Does this seem like a sensible approach?

We used the Kuma PUT API pretty extensively when we were creating the initial set of WebExtension docs, and it worked very well for us. We should start by testing it out on staging, and then on a small subset of the Archive docs.


That’s my thinking, Will.