Steps for migrating a blog from Ghost to Hugo
Run the migration tool for Ghost
A migration tool is available that converts the JSON file exported from Ghost's web console into individual Markdown files, one per article.
https://github.com/jbarone/ghostToHugo
$ ghostToHugo export.json
You may need to add the -d option to adjust the timestamp format. If the exported content uses timestamps in ISO 8601 format, you can run the command as follows:
$ ghostToHugo -d "2006-01-02T15:04:05Z" export.json
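The layout passed to -d follows Go's reference-time convention, so the 2006-01-02T15:04:05Z above is a pattern rather than an actual date. As a hypothetical example, if your export stores timestamps with millisecond precision, a layout like this might be needed:
$ ghostToHugo -d "2006-01-02T15:04:05.000Z" export.json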
Modify the content
Since the Markdown files may not always render as they did before right after conversion, you may need to make some changes to the content. A good way to automate these changes is to use the sed command to edit multiple files at once.
$ sed -ri 's/strings searched/strings converted to/' *
You will need a good command of regular expressions.
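As a more concrete, hypothetical illustration, assuming the converted articles still reference images under Ghost's /content/images/ path and you want Hugo to serve them from /images/ instead, a single pass over all Markdown files could look like this:
$ sed -ri 's|/content/images/|/images/|g' *.md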
Set URL aliases
https://gohugo.io/content-management/urls/#how-hugo-aliases-work
If the URLs change from your previous blog because of Hugo's URL structure, you can set an aliases entry in the front matter to redirect the old URL to the new one.
+++
aliases = ["/old-contents/"]
+++
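Several aliases can be listed at once; the paths below are placeholders for whatever your old Ghost URLs were:
+++
title = "Example post"
aliases = ["/example-post/", "/2019/03/example-post/"]
+++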
Convert img tags to figure tags with a shortcode
https://gohugo.io/content-management/shortcodes/#figure
If a figure tag is more appropriate, depending on which theme you use, you can use Hugo's built-in figure shortcode.
{{< figure src="https://url/image.png" alt="image name" >}}
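The shortcode also accepts further parameters such as caption; the URL and text below are placeholders:
{{< figure src="https://url/image.png" alt="image name" caption="A short caption shown under the image" >}}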
Render raw HTML tags
https://anaulin.org/blog/hugo-raw-html-shortcode/
Hugo sometimes does not render raw HTML tags. You can work around this by creating your own shortcode.
Create a file layouts/shortcodes/rawhtml.html and put the following in it.
{{.Inner}}
Then put the raw html part between the following tags.
{{< rawhtml >}}<br />{{< /rawhtml >}}
This is useful when you want to use HTML tags not supported by Markdown or to insert line breaks in a table.
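For example, a hypothetical Markdown table that needs a line break inside a cell could use the shortcode like this:
| Item | Description |
|------|-------------|
| foo  | first line{{< rawhtml >}}<br />{{< /rawhtml >}}second line |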
Manage files locally
If you want to download all the images referenced in the articles at once and manage the files in a repository, you can extract the list of image URLs from each article and use wget to download them.
$ sed -rn 's/^image = "(.*)"$/\1/p' * > urls.txt
$ wget -i urls.txt
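If you would rather have the files land directly in Hugo's static directory, wget's -P option sets the download destination; static/images/ here is just an assumed target:
$ wget -i urls.txt -P static/images/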