Posts by stevefan1999
Posted in Blogs

I kind of like Nuxt.js.

Despite its somewhat mediocre architecture, and the fact that every update breaks my app, I still think it is Vue's authentic answer to Next.js in React land.

It has lots of plugins, asynchronous loading, modules... The only thing I miss is a server backend to handle business logic.

Yet the universal-application mode doesn't account for this and forces you to split the backend off onto another domain. I know the intention is good, but it is kind of fussy for me. Does this mean I have to run two different Node applications at once?

Fortunately, with one trick the server-side rendering server can be exploited to add the things you want -- using the hooks property in Nuxt!
In your nuxt.config.js, add the following:

```ts
export const hooks = {
  // Write your hooks here
}
```

Next, we need to find the right hook. According to the @nuxt/server source, this is what we want:

```ts
async setupMiddleware() {
  // Apply setupMiddleware from modules first
  await this.nuxt.callHook('render:setupMiddleware', this.app)
  // ...
}
```

This means we need to add a hook on setupMiddleware under the render namespace:

```ts
export const hooks = {
  render: {
    async setupMiddleware (app: any): Promise<void> {
      // Do anything here...
    }
  }
}
```

And tada! You have full access to the underlying connect instance of Nuxt.js. You can, for instance, use apollo-server-express to attach an ad-hoc GraphQL server for testing:

```ts
export const hooks = {
  render: {
    async setupMiddleware (app: any): Promise<void> {
      // Do anything here...
      const schema = makeExecutableSchema({ ... })
      const server = new ApolloServer({ schema })
      server.applyMiddleware({ app })
    }
  }
}
```

If you need WebSocket subscription support, you will also need to hook the listen event as well:

```ts
export const hooks = {
  listen (httpServer: any): any {
    // Attach the WebSocket handler to the created HTTP server here,
    // e.g. server.installSubscriptionHandlers(httpServer)
  }
}
```

This way you won't get a 503 error, since Apollo requires the created HTTP server instance to attach the actual WebSocket listener to, not the Express/Connect instance.

---


I love how NextCloud has such rich functionality and such a good plugin ecosystem, but I also want SeaFile-level speed -- it is well known how slowly NextCloud syncs via WebDAV.

However, no such integration exists, and probably never will. And I think NextCloud as frontend + SeaFile as backend is a good idea...

So, this is the abomination I made: a Docker app that runs SeaDrive and NextCloud together.

Even if you can, please don't do this outside of a novelty context.

First try

Well, SeaDrive under the hood is actually a SeaFile client strapped to a FUSE filesystem, with a polling hook that updates the required repository on the fly -- which means we will need FUSE in Docker first.

```yaml
# Base config
services:
  nextcloud-seafile:
    build: ./
    image: nextcloud-seafile
    restart: on-failure
    devices:
      - /dev/fuse
    cap_add:
      - SYS_ADMIN
```

This is because FUSE requires communicating with the kernel, and Docker explicitly blocks off all such syscalls unless you grant the relevant capability -- in this case SYS_ADMIN.

The Config

Next up we will need to know how to generate a valid config for SeaDrive.

Copying from the official guide, I used this as an example template:

```ini
[account]
server = $SEAFILE_HOST
username = $SEAFILE_USERNAME
token = $SEAFILE_TOKEN
is_pro = $SEAFILE_IS_PRO

[general]
client_name = nextcloud

[cache]
size_limit = 10GB
clean_cache_interval = 10
```

Remember to include the `https?://` scheme in `server`, or you're guaranteed to get a segfault.
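Since the crash mode is so unforgiving, it may be worth guarding against it up front. A minimal sketch -- the `validate_host` helper is my own invention, not part of SeaDrive:

```shell
# Hypothetical guard: refuse to generate seadrive.conf unless SEAFILE_HOST
# carries an explicit scheme, since a bare hostname segfaults SeaDrive.
validate_host() {
  case "$1" in
    http://*|https://*) return 0 ;;
    *) return 1 ;;
  esac
}

validate_host "$SEAFILE_HOST" || {
  echo "SEAFILE_HOST must start with http:// or https://" >&2
}
```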

And this is how I get a token:

```shell
curl -s \
  -d "username=$SEAFILE_USERNAME" \
  -d "password=$SEAFILE_PASSWORD" \
  $SEAFILE_HOST/api2/auth-token/ | jq -r '.token'
```

If `jq` returns `null`, something is definitely wrong -- most likely incorrect login credentials. In this authentication context, a `null` token can be treated as a generic authentication error.
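To keep that failure from silently propagating into the config, the fetch can be wrapped in a small guard; `fetch_token` and `check_token` are hypothetical helper names, not part of any SeaFile tooling:

```shell
# Sketch: fetch the API token, and make it easy to fail fast when the
# response has no token (jq prints the literal string "null" in that case).
fetch_token() {
  curl -s \
    -d "username=$SEAFILE_USERNAME" \
    -d "password=$SEAFILE_PASSWORD" \
    "$SEAFILE_HOST/api2/auth-token/" | jq -r '.token'
}

check_token() {
  [ -n "$1" ] && [ "$1" != "null" ]
}

# Usage (assumes the SEAFILE_* variables are exported):
#   TOKEN=$(fetch_token)
#   check_token "$TOKEN" || { echo "authentication failed" >&2; exit 1; }
```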


So you thought just using the normal seadrive CLI invocation would be enough?

```shell
seadrive -f -c /data/$filename -d /data/seadrive-data /data/seadrive-fuse -l /dev/stdout &
```

Wrong. The official NextCloud image runs Apache under `www-data`, which means Apache will not be able to access the FUSE mountpoint, so we need to add at least `allow_other`. Corrected:

```shell
mkdir -p /data/seadrive-fuse # Create it if it doesn't exist, otherwise FUSE will exit
seadrive -f -c /data/$filename -o nonempty,allow_other -d /data/seadrive-data /data/seadrive-fuse -l /dev/stdout &
```
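Because seadrive is backgrounded with `&`, Apache can race it and see an empty directory. A hedged sketch of a readiness wait -- `wait_for_mount` is my own helper, and the timeout is arbitrary:

```shell
# Hypothetical readiness check: poll until the FUSE mount is live before
# starting Apache, so www-data never observes an empty /data/seadrive-fuse.
wait_for_mount() {
  dir=$1
  tries=${2:-30}
  while [ "$tries" -gt 0 ]; do
    mountpoint -q "$dir" && return 0
    tries=$((tries - 1))
    sleep 1
  done
  echo "FUSE mount $dir never came up" >&2
  return 1
}

# e.g.: wait_for_mount /data/seadrive-fuse 30 && apache2-foreground
```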

Okay... we got the NextCloud login page... we filled in the root admin credentials... database imported...

What? My data folder is readable by other users, so I need to set it to rwxrwx---?

So the first instinct is to use the chmod offered by the error message:

```shell
chmod -R 770 /data/seadrive-fuse/My\ Library/nextcloud
```

Annnnnd again, no. SeaDrive doesn't currently honor Unix permissions; it has its own ACL scheme instead, and I see no other options either.

Fortunately we can disable the stupid check by patching it via occ:

```shell
sudo -u www-data php occ config:system:set check_data_directory_permissions --value="false" --type=boolean
```

Et voilà! Now NextCloud won't bitch again!

You can upload anything from the NextCloud web panel and see it synced into SeaFile too (if not instantaneously). Sweet!

Next time I should test how fast and stable SeaDrive really is under heavy load, to see whether it is usable in practice.

But this is still not perfect; see the repo below for some known issues.

POC repo

All this is available in stevefan1999-personal/docker-nextcloud-seafile.

You should know what to do, and you should try this at your own risk.

-- Steve

---


In the end I went with Plan A after all, i.e. copying the config and data over to the new server, but with some small variations of my own.



Also, with Docker I can isolate the ports. Now, besides not having to worry about port knocking at all (since all the containers sit inside their own walled-garden NAT, and the host doesn't expose those ports at all), I don't have to worry about someone hitting me with a 0-day either: even if you manage to pwn a container, all you get is an empty shell.
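A minimal compose sketch of that walled-garden layout, with hypothetical service names -- only the reverse proxy publishes a port, and every other container is reachable solely over the private bridge network:

```yaml
services:
  app:
    image: nodebb/docker
    networks: [backend]   # no "ports:" entry -- nothing exposed on the host

  proxy:
    image: nginx:alpine
    networks: [backend]
    ports:
      - "443:443"         # the single port the host publishes

networks:
  backend: {}
```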


---





With that in mind, I decided to take the spare parts I had left over from three months ago, plus HK$700 (Money money money, must be funny...), and build a ghetto 4-bay NAS. (Of course everything in it is knock-off parts and imported e-waste 😒 and they say you never get good stuff cheap.)

Old server spec: arm64 4C4T Cortex-A53 (RK3328), 4GB LPDDR3

New server spec: amd64 8C16T Xeon E5-2660, 8GB DDR3 ECC

At first glance that doesn't look like much -- an ordinary server upgrade, right? So why make a post about it? Am I here to show off?

It's not just OMV I have to deal with, but also Seafile, Transmission, Plex, MariaDB + MongoDB, NodeBB, Nginx...

The biggest problem is that Seafile's data clusters are all small files. Even after running a GC pass and sacrificing old revisions to shrink them, there's no guarantee I can transfer all the clusters within a day... we're talking 50G of small files here, mate.

So the data migration this time is a real headache. Sure enough, nobody wants to touch anything that's already established -- just like Hong Kong IT.


  1. Copy /etc and /usr straight over, then clone an identical OMV on a different arch
    1. Won't work: the mount topology has already been messed up by OMV, and I can't guarantee there'd be room left to back out
      1. i.e. worst case scenario is the drive is unbootable but data intact
    2. However, adding i386/amd64 to dpkg on the origin rootfs might be viable
      1. Since the source code is available, there must be ABI/API compatibility; the downside is the extra space it eats
      2. It also implies I must boot from the same drive as the old Rock64 origin, with no way to properly move onto RAIDZ
      3. Will only consider this as a last resort
  2. Split into two clusters: the Rock64 goes read-only + immutable, only the new server gets used, and the two never touch each other
    1. Then how is that different from just running two separate nodes?
    2. And it means starting everything over from scratch
    3. So this option is off the table, unless I'm truly out of road and have to go scorched-earth
  3. Keep Seafile on the Rock64 as-is; everything else genuinely starts from scratch
    1. The currently feasible route
    2. I really don't want to spend two or three days rsyncing a 50-100G clusterf__k
      1. The downside is the hassle; everyone knows straddling two machines is a massive pain
        1. And it still keeps all the problems raised at the very top of this post
      2. How to lay out the nginx vhosts is another question; my nginx backend services are pure localhost...
        1. Can't afford to round-trip twice (LAN tunnel + Rock64 reverse proxy)
          1. Which means yet another round of fiddling
        2. Of course, LAN + reverse proxy shouldn't be that slow... distributed filesystems would be even slower... we'll see
    3. Which also means I could drop OMV
      1. I'm getting fed up with this garbage anyway -- it actually uses PHP
      2. FreeNAS? Rockstor? Synology? Linux From Scratch?
    4. But should everything, this NodeBB included, be moved over?
      1. If it stays, all the DBs on the old Rock64 can remain as master
      2. (Anyway, the consequences are a huge _ing pain)

Of course, now that my old CPU has risen from the dead and I have this much spare resourcefulness, I'll put it to good use running Docker and k8s, and let it double as a 24/7 game server so everyone can play Minecraft/CSGO/Insurgency/etc (wait, on my home network? 😨 that sounds a bit dangerous, mate...)

Yes, so I'm looking into whether I can install kube-proxy on some other hosting and tunnel it as an IP mask 😩 but the target host runs OpenVZ, so I can't just go with GRE.

I originally planned to use ZeroTier, but its encryption layer makes it too slow, so I'm hoping to crowdsource some ideas here. Help me sort this out somehow 😂

---

This place has lain fallow for ages. Today I suddenly had the idea of trying NodeBB as a test field 😂 and it seems to have turned out pretty well. What makes me happiest of all is getting to abandon that garbage PHP.