cofacts

Month: 2020-07

2020-07-01

github 01:41:36

Comment on #202 Convert GraphQL userArticleLinks to cursor-based pagination

"Rebase and merge" doesn't seem to produce a merge commit 囧. I'll just use the most traditional merge; it makes it easier to trace a PR from its commits.

github 01:56:59

Comment on #201 Feature/163 insert data

<https://github.com/godgunman|@godgunman> did you maybe forget to push after finishing the changes?

github 01:59:13

#206 Add google analytics to LIFF

• Google analytics setup for LIFF • sends pageview on page change • sends user timing on LIFF load • record redirect count as redirect event

github 02:14:49

Comment on #206 Add google analytics to LIFF

*Pull Request Test Coverage Report for <https://coveralls.io/builds/31779780|Build 978>* • *0* of *0* changed or added relevant lines in *0* files are covered. • No unchanged relevant lines lost coverage. • Overall coverage remained the same at *98.379%* :yellow_heart: - <https://coveralls.io|Coveralls>

github 05:25:58

Comment on #270 Hotfix/minor bugs

The user id is used to identify whether the current user added the category as well (so that they can remove the categories they added themselves) (in line 298). `disableVote` may be a bit confusing. It's good to extract the author-check logic into the upper component, though.

github 05:31:03

Comment on #270 Hotfix/minor bugs

As for the mutation part, it's certainly difficult to put them in an appropriate place. `useMutation` is a hook and it uses a closure to get the required params, which means we should put the params and `useMutation` in the same function, and at the same time keep the calling order consistent so that it doesn't break the rules of hooks.
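A minimal sketch of that co-location, assuming hypothetical component and mutation names (the import path and the GraphQL document below are illustrative, not the project's actual code):

```
import React from 'react';
import { useMutation } from '@apollo/react-hooks'; // assumed Apollo hooks package
import gql from 'graphql-tag';

// Hypothetical mutation document; the real one lives in the component being refactored.
const UPDATE_CATEGORY_STATUS = gql`
  mutation UpdateCategoryStatus($articleId: String!, $categoryId: String!) {
    UpdateArticleCategoryStatus(
      articleId: $articleId
      categoryId: $categoryId
      status: DELETED
    ) {
      articleId
    }
  }
`;

function RemoveCategoryButton({ articleId, categoryId }) {
  // useMutation is called unconditionally at the top of the component, so the
  // hook call order stays stable; the closure captures the params it needs
  // from this component's own props.
  const [removeCategory, { loading }] = useMutation(UPDATE_CATEGORY_STATUS, {
    variables: { articleId, categoryId },
  });

  return (
    <button disabled={loading} onClick={() => removeCategory()}>
      Remove category
    </button>
  );
}

export default RemoveCategoryButton;
```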

mrorz 10:26:44
Today's meeting notes

Recent meetings will mainly focus on tracking wrap-up progress and releasing as much of what is already done and tested as possible.
There are also new things to review.
https://g0v.hackmd.io/@mrorz/cofacts-meeting-notes/%2FU-KmeEYJTVmtCFB-bqWfGg
Things we'll roughly handle tonight:

- Track wrap-up progress
- Go over the trend chart spec
- LINE bot release test
- Website release re-test (last week's blocker should be resolved)
- Discuss LIFF API changes
github 13:39:37

Comment on #269 Feature/reply editor toobar

1. `replyReference` does not look like a variable that is really needed. If we do need it, please use `const` and replace `reply.text + '\n'` with `replyReference` instead. 2. Please remove the unused comment. 3. It seems that we have not copied the reply reference to `ReferenceInput` yet?

Ah, I groggily pushed half-finished work in the middle of the night ==
github 13:39:37

Review on #269 Feature/reply editor toobar

The unit test looks clearer now :) Thanks! I think we are good to go after we implement reference copying on "Add this reply to my reply" button.

github 14:05:21

Review on #270 Hotfix/minor bugs

LGTM! I am seeing this warning when running `npm run dev`, not sure if it will cause errors:
```
Warning: fragment with name ArticleWithCategories already exists.
graphql-tag enforces all fragment names across your application to be unique; read more about
this in the docs: <http://dev.apollodata.com/core/fragments.html#unique-names>
```
Let's see it on staging.

github 14:26:03

#271 [Trivial] Fix trendline

*Before* <https://user-images.githubusercontent.com/108608/86210185-babaf000-bba6-11ea-9111-6c086ffce5b0.png|image> *After* <https://user-images.githubusercontent.com/108608/86210121-a37c0280-bba6-11ea-9a82-471e8e69d367.png|image>

Only one line, but it fixes something very important XD
Ta-dah
github 15:30:21

Comment on #269 Feature/reply editor toobar

Thanks for adding `reference`. I think we should use `reply.reference` directly instead of composing it from `reply.hyperlinks`. Sometimes the author of the reply may <https://old.cofacts.org/reply/ORLVmHEBrhVJn3LNJnmn|put some info in the reply's reference field>, such as adding section titles between hyperlinks, or providing descriptions for URLs that are more concise than the hyperlink titles.

The new reply toolbar is now on staging: you can search past replies and add them to the reply you're writing, and insert emoji too~

Thanks @yanglin5689446 👏
Since the “add category” button is hidden on production, I'll still push this version to production first after testing~

Shipping a month's worth of bug fixes plus the editor first
It's on production now
Svelte seems to have fairly complete TypeScript support recently; we might get to use it for LIFF ~

https://github.com/sveltejs/svelte/issues/4518#issuecomment-650635007
Only seeing this now...
The add-category feature still works fine on the editor-toolbar branch
I specifically fixed the issue where the update would fail because the cache wasn't updated
No idea why the dev branch is broken
I'll take a look in a bit
As for the delete part,
I'm not sure whether it's an API issue,
since I'm just calling the `UpdateArticleCategoryStatus` API
and have no idea why it blows up

Not sure why it worked before
But when calling the `CreateArticleCategory` API, the user id has to be returned for the frontend cache to be updated
mrorz 15:52:20
Things we'll roughly handle tonight:

- Track wrap-up progress
- Go over the trend chart spec
- LINE bot release test
- Website release re-test (last week's blocker should be resolved)
- Discuss LIFF API changes
github 18:02:44

Review on #269 Feature/reply editor toobar

LGTM! Thanks for the fix, let's see it in staging :sunglasses:

github 19:03:55

#207 Feature/165 cron notify

<https://github.com/cofacts/rumors-line-bot/issues/165|#165>

github 19:06:55

Comment on #207 Feature/165 cron notify

*Pull Request Test Coverage Report for <https://coveralls.io/builds/31796635|Build 984>* • *26* of *44* *(59.09%)* changed or added relevant lines in *4* files are covered. • No unchanged relevant lines lost coverage. • Overall coverage decreased (*-4.7%*) to *93.679%* :yellow_heart: - <https://coveralls.io|Coveralls>

2020-07-02

mrorz 08:54:44
https://www.facebook.com/groups/linebot/permalink/2549701032027134/ LIFF can now be loaded via npm, and there are TypeScript definitions too

facebook.com

Richard Lee

The LIFF SDK can now be installed via NPM as well. With the NPM SDK installed, the dev environment also gets type and autocomplete support; writing with it is super efficient!


2020-07-03

mrorz 01:39:12
It's on production now
github 13:48:20

Comment on #207 Feature/165 cron notify

If we put the cron job and the web server together, and we run multiple instances of the web server, the cron job will be triggered once per instance, causing the job to run multiple times. I would suggest we just prepare a standalone NodeJS script that can be invoked via the command line. On other environments, people can use the native crontab to trigger the script; and on Heroku, we can use <https://devcenter.heroku.com/articles/scheduler|Heroku scheduler> to schedule a run.

github 13:48:20

Comment on #207 Feature/165 cron notify

Since we already use `date-fns` in our application, I think we should use <https://date-fns.org/v2.14.0/docs/add|`date-fns/add`> directly instead of implementing our own function.
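For instance, assuming the goal is something like "now plus one day", `date-fns/add` takes a duration object directly (the offset below is only an example):

```
import add from 'date-fns/add';

// "now + 1 day"; the duration shape ({ years, months, days, hours, ... }) is part of date-fns v2.
const oneDayLater = add(new Date(), { days: 1 });
```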

github 13:48:20

Review on #207 Feature/165 cron notify

I haven't reviewed the core logic that determines who to notify; this is just to provide some thoughts on how the cron job can be defined and how users can be notified. I love the test cases, they helped me a lot when understanding how each utility function works :nerd_face:

github 13:48:20

Comment on #207 Feature/165 cron notify

I wonder if `push` is used anywhere yet?

github 13:48:20

Comment on #207 Feature/165 cron notify

I would suggest sending a flex message with a button when doing multicast, so that we can hide the complex LIFF URL from users behind a button with a URI action.
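A rough sketch of such a message in the LINE Messaging API flex format; the text, altText, and LIFF URL below are placeholders, not the actual wording:

```
// Placeholder flex message: the body text stays short, and the LIFF URL is
// hidden behind a button with a URI action instead of being pasted as raw text.
const message = {
  type: 'flex',
  altText: 'Articles you viewed have new replies',
  contents: {
    type: 'bubble',
    body: {
      type: 'box',
      layout: 'vertical',
      contents: [
        { type: 'text', text: 'Articles you viewed have new replies', wrap: true },
      ],
    },
    footer: {
      type: 'box',
      layout: 'vertical',
      contents: [
        {
          type: 'button',
          action: {
            type: 'uri',
            label: 'View replies',
            uri: 'https://liff.line.me/xxxx-xxxx', // placeholder LIFF URL
          },
        },
      ],
    },
  },
};
```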

github 13:48:20

Comment on #207 Feature/165 cron notify

If there are multiple instances running the web server, the cron job may be run once for each instance / process. I would suggest we just implement a NodeJS script that can be run from the command line. On a normal production environment, the devops can use an ordinary crontab to trigger the job; on our production environment on Heroku, we can use <https://devcenter.heroku.com/articles/scheduler|Heroku scheduler>. In this case, we won't need the `CronJob` library; we just need to make sure the script works when run on the CLI, just like the <https://github.com/cofacts/rumors-api/blob/master/src/scripts/cleanupUrls.js#L136-L138|hyperlink cleaning job> in rumors-api.
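A minimal sketch of such a standalone script, following the same `require.main` pattern as the linked rumors-api job; the file path and the imported function are hypothetical:

```
// src/scripts/notifyNewReplies.js -- hypothetical path and module names
import { notifyUsersOfNewReplies } from '../lib/notifyNewReplies';

async function main() {
  await notifyUsersOfNewReplies();
}

// Only run when invoked directly from the CLI (crontab, Heroku scheduler, ...);
// importing this module elsewhere (e.g. in tests) does not trigger the job.
if (require.main === module) {
  main().catch(e => {
    console.error(e);
    process.exit(1);
  });
}

export default main;
```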

github 13:51:42

Comment on #207 Feature/165 cron notify

no, currently we just use `multicast` and `notify`.

github 23:53:58

#272 hotfix: add category bug

Add user id field to CreateArticleCategory

2020-07-04

mrorz 16:13:32
By the way, while reading @ggm's https://github.com/cofacts/rumors-line-bot/pull/201/files and @acerxp511's https://github.com/cofacts/rumors-line-bot/pull/207/files I noticed:
• PR207's cron job assumes that every `UserArticleLink` has `lastViewedAt`
• PR201 has two ways of creating a `UserArticleLink`: when a brand-new article is created, the `UserArticleLink` has no `lastViewedAt`; but when the user hits an existing article, a `UserArticleLink` is created or updated with `lastViewedAt` written, regardless of whether the article has any reply
What I'd like to discuss: should we require every `UserArticleLink` to have a `lastViewedAt` field? (Related spec

Note: in any case, `UserArticleLink.createdAt` will always exist and, once written, never change, so it's out of scope here.

g0v.hackmd.io

rumors-line-bot 過去傳過訊息 implementation - HackMD

I didn't notice that! I assumed `lastViewedAt` would be created along with it
Actually I'm in favor of every `UserArticleLink` having `lastViewedAt`; I don't see why we'd need to distinguish "has `lastViewedAt`" from "no `lastViewedAt`"~
Hmm, I think either works. Making it mandatory seems easier to implement; it's just a tiny bit odd semantically.
Eh, not really. You could also say that when a user submits an article they have effectively already viewed it, so setting the timestamp at that moment makes sense too.
If every `UserArticleLink` has `lastViewedAt`, then `UserArticleLink` only needs a single create-or-update method [note], and the cron job has an easier time reading the data since it doesn't need to care about `createdAt`.

[Note] The `UserArticleLink` model currently has 3 methods: `create`, `updateTimestamp`, and the completely unused `findOrInsertByUserIdAndArticleId`, all of which upsert a `UserArticleLink`. I think that's too many XDD
Right
`updateTimestamp` has become a bit redundant, since there aren't that many timestamps anymore 😂😂
OK then, let's go with:
• when a `UserArticleLink` is inserted for the first time, write both `createdAt` and `lastViewedAt`
• afterwards, only update `lastViewedAt`
I'll use this as the conclusion when reviewing both of your PRs (see the sketch below)
Thanks for the discussion m(_ _)m
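A minimal sketch of that agreed behaviour as a single MongoDB upsert; the collection handle and function name below are assumptions, not the model's real internals:

```
// `userArticleLinks` is assumed to be the connected MongoDB collection behind the model.
async function createOrUpdateUserArticleLink(userArticleLinks, userId, articleId) {
  const now = new Date();
  const { value } = await userArticleLinks.findOneAndUpdate(
    { userId, articleId },              // equality fields are copied into the doc on insert
    {
      $set: { lastViewedAt: now },      // always refreshed
      $setOnInsert: { createdAt: now }, // written only when the document is first inserted
    },
    { upsert: true, returnOriginal: false }
  );
  return value;
}
```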
Eh, then `createdAt` seems a bit redundant too
When would it ever be used?
Oh, it's purely for the record
It won't be used
We just keep it as the timestamp of when the object was first created
If needed it could also be shown in the LIFF
though there isn't really space for it XD
I later renamed `updateTimestamp` to `upsertByUserIdAndArticleId` and deleted `findOrInsertByUserIdAndArticleId`
I think `upsertByUserIdAndArticleId` is more broadly useful
Everything else is updated; the one remaining point for discussion I left unresolved. I think we could move `find()` into `Base` as a method every model gets out of the box (?), so `UserSettings` could use it too
Agreed on `upsertByUserIdAndArticleId`
Also in favor of removing `updateTimestamp`
Don't forget that `create` should be removed as well, since there's currently no create use case that isn't covered by `upsertByUserIdAndArticleId`

That said, the latest commit still seems to have `updateTimestamp`?

And I think `Base#find` is fine too~
Huh, the latest commit no longer uses `updateTimestamp`? Did I miss something?
Ah! XDD Turns out I forgot to push a commit
As for using `create`, are you referring to this part?

https://github.com/cofacts/rumors-line-bot/blob/a3cda30eb414100405b1c3fd9e492308ddff8d34/src/webhook/handlers/askingArticleSubmissionConsent.js#L58

I don't think there's any problem using `create` here. A new article created by the user has a `CreateArticle.id` that is always new and never duplicated, and it always produces a new `UserArticleLink`, so using `create` seems perfectly reasonable?
Are you thinking of replacing this with `upsertByUserIdAndArticleId`? That would work, but the intent becomes a bit odd and confusing; upsert still mainly connotes update. Do you want it to replace create everywhere?


I thought "every user article link always has `lastViewedAt` written" was the consensus from the discussion above?

If "every user article link always has `lastViewedAt` written", then create and upsert end up doing almost exactly the same thing.
As for the semantics of upsert,

that's why the similar APIs in rumors-api start with "CreateOrUpdate" instead of the more common "upsert", which indeed leans toward update XD
Yeah, always writing `lastViewedAt` is agreed. But that's a separate question from using `upsert` for every operation, isn't it? What I meant by the intent being confusing is:
using `create` says this `UserArticleLink` is definitely showing up as a brand-new record
using `upsert` says this `UserArticleLink` may have existed before and been deleted later, like when we backfill data
But the view-article case isn't "existed before and later deleted"; the user may have viewed it before, or may never have viewed it at all,
yet it also uses `upsertByUserIdAndArticleId`
If semantics are the concern, I'd suggest createOrUpdate instead of upsert
Oh right, what I'm trying to say is that `upsert` is for when we aren't sure whether the record exists and want to insert one if it doesn't; in our situation that means the user viewed it before but it wasn't recorded
If it's purely a `create`, then I think we should just use `create` rather than `upsertByUserIdAndArticleId`, because `upsertByUserIdAndArticleId` implies more than that
Let me lay out the two options as I see them:
1. Keep both `create` and `upsertByUserIdAndArticleId`. The upside is that a developer reading `askingArticleSubmissionConsent` knows the `UserArticleLink` is created here for the first time, while in `choosingArticle` they know the `UserArticleLink` may have existed before.
2. Use `upsertByUserIdAndArticleId` everywhere. We lose that information, but the code is shorter.
My take: `create` where it should be a create, `update` where it should be an update, and `upsert` when you need an atomic update-plus-create.
Hmm... after thinking it over I'll change it the way you suggested. I figure you probably implemented things the same way elsewhere (e.g. in the API), so our overall design will be more consistent.
Thanks. Did you push? @ggm
Pushed, and rebased too
I did keep `UserSettings.create` and `UserArticleLink.create`; they're both still used in tests and can't be replaced by createOrUpdate
😮 2
github 16:36:23

Comment on #201 Feature/163 insert data

Why don't we just use the mongo client for a find query? (Also for L118) The main point of my previous comment was that `UserSetting.findOrInsertByUserId` mutates the database, which can make the test pass even when the function under test (`singleUserHandler`) does not work as expected. It did not mean we should avoid using `mongoClient` entirely ._.

github 16:36:23

Review on #201 Feature/163 insert data

Thanks for updating the PR accordingly! I have made a few suggestions and added the conclusion from the discussion <https://g0v-tw.slack.com/archives/C2PPMRQGP/p1593850412237300|on Slack> today.

github 16:36:23

Comment on #201 Feature/163 insert data

There is already a static method called `find` in `UserArticleLink`, which is used by the GraphQL resolver for `Query.userArticleLinks`. `find` was added after this branch branched out, and it now appears in the file after your rebase. I think we should leave only one endpoint that lists a user's user-article links. I like the name `findByUserId` because it is clearer (and considering that we also need to find by article id in <https://github.com/cofacts/rumors-line-bot/pull/207|#207>). But the pagination mechanism is also required by the GraphQL API. Would you merge the implementations of the two find methods into `findByUserId()`, remove `find()`, and update the resolver (`graphql/resolvers/Query.js`) accordingly?

github 16:36:23

Comment on #201 Feature/163 insert data

Since `choosingReply` does not involve `UserArticleLink`, I think we should not include these in the test file, or it may confuse readers into thinking that `choosingReply` uses `UserArticleLink`.

github 16:36:23

Comment on #201 Feature/163 insert data

As discussed on Slack today, we should use an API that also writes `lastViewedAt`. Also, please add `lastViewedAt` as a required field in `userArticleLink.json` to ensure that `lastViewedAt` always exists in the DB.

github 16:36:23

Comment on #201 Feature/163 insert data

Actually there is another method called `findOrInsertByUserIdAndArticleId` in the `UserArticleLink` model, doing almost the same thing as `updateTimestamps`, but it's not used anywhere. I suggest we leave only one such method in `UserArticleLink`.

github 18:32:07

Comment on #207 Feature/165 cron notify

1. Accepting multiple `articleIds` and searching by `{articleId: {$in: articleIds}}` provides better flexibility. It should also be more efficient. 2. Since this API still returns `UserArticleLink`s instead of users, the `UserList` in its name is a bit confusing. I suggest something like `findByArticleIds`.
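A sketch of what the suggested lookup could look like; `collection` stands in for the userArticleLink MongoDB collection the model uses, so the wiring is an assumption:

```
// One $in query for the whole batch instead of one query per article.
async function findByArticleIds(collection, articleIds) {
  return collection.find({ articleId: { $in: articleIds } }).toArray();
}
```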

github 18:32:07

Comment on #207 Feature/165 cron notify

According to <https://g0v-tw.slack.com/archives/C2PPMRQGP/p1593850412237300|our discussion on slack>, all `userArticleLink` should have `lastViewedAt`. We may need to update these fixtures to match that.

github 18:32:07

Comment on #207 Feature/165 cron notify

I suggest we move some of the `articleReply`'s `createdAt` values outside of the queried time range because • `ListArticle(repliedAt)` includes an article as long as it has _one_ `articleReply` within the time range, so its results can contain articles with _some_ article-replies outside the time range • Covering that possibility in the test case better covers all scenarios • Currently there are already plenty of `articleReplies` in the fixture, but all of them are in the same time range. Moving some of them outside the time range increases the diversity of fixture combinations.

github 18:32:07

Review on #207 Feature/165 cron notify

I have reviewed the logic of the cron jobs. Thanks for the contribution! As our <https://datastudio.google.com/u/0/reporting/18J8jZYumsoaCPBk9bdRd97GKvi_W5v-r/page/ckUQ|open data analytics> shows, sometimes more than 100 articles receive replies in a single day. I have made some suggestions to enhance the robustness of the cron job by processing in batches, plus some suggestions about the diversity of test fixtures.

github 18:32:07

Comment on #207 Feature/165 cron notify

The combination of `Promise.all` and `....map()` will cause `UserSettings.findOrInsertByUserId` to be invoked in parallel. If there are `N` users to notify, this will trigger `N` queries to the database _at once_. This can be dangerous when a very popular article (>100 user article links) receives a new reply. I would suggest • Divide `Object.keys(notificationList)` (user ids) into batches, with the size of each batch controlled by a constant. • Use a for-loop with `await` inside to make sure only 1 batch is processed at a time. • Create a new static method on `UserSettings` that reads user settings in batch and does not perform an upsert. • Use `findAll({userId: {$in: userIds}})` in the new method to retrieve a batch of user settings • There is no point inserting `UserSettings` in cron jobs • Skip a user when the user does not have `UserSettings` (it should never happen, but just in case).
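A sketch of the batching idea; the batch size, the `userSettings` collection handle, and `sendNotification` helper are assumptions for illustration:

```
const BATCH_SIZE = 25; // assumed constant controlling how many users are handled per round

// `userSettings` is assumed to be the MongoDB collection backing the UserSettings model;
// `sendNotification(userId, articleIds)` is a hypothetical helper.
async function notifyInBatches(notificationList, userSettings, sendNotification) {
  const userIds = Object.keys(notificationList);

  for (let i = 0; i < userIds.length; i += BATCH_SIZE) {
    const batch = userIds.slice(i, i + BATCH_SIZE);

    // Read-only batch lookup; no upsert inside the cron job.
    const settings = await userSettings.find({ userId: { $in: batch } }).toArray();

    for (const setting of settings) {
      // Skip users who turned off new-reply notifications.
      if (!setting.allowNewReplyUpdate) continue;
      await sendNotification(setting.userId, notificationList[setting.userId]);
    }
  }
}
```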

github 18:32:07

Comment on #207 Feature/165 cron notify

Sending one GA event for each user that receives a notification may <https://developers.google.com/analytics/devguides/collection/analyticsjs/limits-quotas|hit this quota> if there are more than 20 users to notify: > Each gtag.js and analytics.js tracker object starts with 20 hits that are replenished at a rate of 2 hits per second. I think for now we can just send one event to GA that records the number of notifications we sent this time (in the <https://developers.google.com/analytics/devguides/collection/analyticsjs/events|event value>, so that the numbers add up when queried across multiple days).
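A sketch of sending that single event, assuming the GA Measurement Protocol (v1) is hit directly rather than through whatever GA helper the bot already uses; the tracking id, client id, category, and action are placeholders:

```
import fetch from 'node-fetch'; // assumed to be available in the project

async function sendNotifyCountToGA(notifiedCount) {
  await fetch('https://www.google-analytics.com/collect', {
    method: 'POST',
    body: new URLSearchParams({
      v: '1',
      tid: process.env.GA_ID,    // tracking id (placeholder env var name)
      cid: 'cron-notify',        // fixed client id for the cron job
      t: 'event',
      ec: 'Cronjob',             // event category (placeholder)
      ea: 'sendNotification',    // event action (placeholder)
      ev: String(notifiedCount), // event value: adds up when queried across multiple days
    }).toString(),
  });
}
```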

github 18:32:07

Comment on #207 Feature/165 cron notify

These comments will be sent as part of the GraphQL query string. I think we can remove them because the variable name already implies the same meaning.

github 18:32:07

Comment on #207 Feature/165 cron notify

Suggest adding `status:NORMAL` to exclude deleted replies

github 18:32:08

Comment on #207 Feature/165 cron notify

For processing the articles, I would suggest 1. Use pagination to get _all_ articles that match the filter, not just the first `N` articles. 2. Sequentially process each batch of `N` articles. Design the APIs so that they can handle multiple articles at once.
```
// Async generator that gets a batch of articles with articleReply between `from` and `to`.
// The generator encapsulates complex pagination logic so that the function using it can focus on
// batch processing logic without worrying about pagination.
//
async function* getArticlesInBatch(from, to) {
  // Get pageInfo outside the loop since it's expensive for rumors-api
  const { data: { ListArticles: { pageInfo: { lastCursor } } } } = await gql`...`({ from, to });

  let after = undefined;
  while (lastCursor !== after) {
    // Actually load `edges` and process.
    const { data: { ListArticles } } = await gql`...`({ from, to, after });
    yield ListArticles.edges.map(({ node }) => node);

    // The next gql call should go after the last cursor of this page
    after = ListArticles.edges[ListArticles.edges.length - 1].cursor;
  }
}

async function getNotificationList(lastScannedAt, nowWithOffset) {
  const result = {};

  // The for loop ensures that only one batch is processed at a time, so that we do not
  // make a bunch of queries to our MongoDB at once.
  //
  for await (const articles of getArticlesInBatch(lastScannedAt, nowWithOffset)) {
    // Process the batch.
    // Make an API that processes a batch of articleIds, instead of only one ID at a time.
    const userArticleLinks = await UserArticleLink.findByArticleIds(articles.map(({ id }) => id));
    userArticleLinks.forEach(link => { /* Logic that populates result */ });
  }

  return result;
}
```

github 18:39:32

Comment on #201 Feature/163 insert data

Should we test if `unfollow` event changes `allowNewReplyUpdate` to false?

github 19:05:06

Comment on #272 hotfix: add category bug

Still can't remove a category added by myself from the article detail page. The category first seems to be deleted, but re-appears after a refresh. I am seeing this warning in the console; not sure if it is relevant. > Warning: fragment with name ArticleWithCategories already exists. > graphql-tag enforces all fragment names across your application to be unique; read more about > this in the docs: <http://dev.apollodata.com/core/fragments.html#unique-names|http://dev.apollodata.com/core/fragments.html#unique-names> I found that • there is no `status` field in the fetched `articleCategories` in the Apollo cache. • when querying the article's `articleCategories` we did not specify `(status: NORMAL)` in GraphQL. I guess that's probably why all `articleCategories` are fetched and displayed. Since the display logic does not involve `status`, I suggest specifying `(status: NORMAL)` when querying `articleCategories` in the article detail page.
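A sketch of the suggested query change as a `graphql-tag` document; the query name and field selections other than `articleCategories(status: NORMAL)` are illustrative, not the page's actual query:

```
import gql from 'graphql-tag';

// Only fetch normal (non-deleted) categories, so deleted ones never enter the Apollo cache.
const LOAD_ARTICLE = gql`
  query LoadArticlePage($id: String!) {
    GetArticle(id: $id) {
      id
      articleCategories(status: NORMAL) {
        categoryId
        status
      }
    }
  }
`;
```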

2020-07-05

github 02:34:11

Comment on #201 Feature/163 insert data

Hmm, that's because you said `using MongoDb commands` :joy::joy:. It was a bit confusing to me, but doing it with the mongo CLI is also quick, so that's how I did it haha

github 03:35:50

Comment on #201 Feature/163 insert data

How about moving `find()` into `src/database/models/base.js`, so that all models can use `find()` just like they already have `findOneAndUpdate`? Then `findByUserId()` is for listing user-article links in `UserArticleLink`, and `find()` is for general use in all models.
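A sketch of that shape, assuming the base class can resolve each model's MongoDB collection; the `getMongoClient()` helper and `collectionName` convention below are assumptions about internals, not the real `base.js`:

```
// Hypothetical: getMongoClient() stands for however the codebase obtains its connected client.
class Base {
  static async collection() {
    // Assumed convention: each subclass declares a static collectionName.
    return (await getMongoClient()).db().collection(this.collectionName);
  }

  static async find(query = {}, options = {}) {
    const col = await this.collection();
    // General-purpose query shared by all models (UserArticleLink, UserSettings, ...),
    // mirroring how they already share findOneAndUpdate; model-specific helpers
    // such as findByUserId() stay in their own model.
    return col.find(query, options).toArray();
  }
}
```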

ggm 05:30:50
I later renamed `updateTimestamp` to `upsertByUserIdAndArticleId` and deleted `findOrInsertByUserIdAndArticleId`
github 10:59:12

Comment on #201 Feature/163 insert data

But MongoDB already has a method named `find()`. I would like to see `findXXX` methods directly invoking methods of the MongoDB client. Adding one more layer over `find()` feels a bit over-engineered to me.

(This comment was later deleted from GitHub)
github 11:10:22

Comment on #201 Feature/163 insert data

Agreed on adding `find()` to `Base` so that the `Base` API is more consistent :woman-gesturing-ok:

ggm 12:53:18
By the way, I did something silly yesterday: I accidentally pushed after `git rebase master`, but then I ran `git rebase dev` and pushed again. I've checked and it shouldn't have broken anything, but if you see anything weird (?) it's probably my rebase not being cleaned up properly
Looking at the commit graph, it should be correct
github 17:05:59

Review on #272 hotfix: add category bug

It's working properly now. Thanks for the fix!

github 17:24:38

#179 ClaimReview search API

Discussion: <https://g0v.hackmd.io/@mrorz/cofacts-meeting-notes/%2F%40mrorz%2FByKAmKITU|https://g0v.hackmd.io/@mrorz/cofacts-meeting-notes/%2F%40mrorz%2FByKAmKITU> > orz: Google has a dedicated search box for ClaimReview: <https://toolbox.google.com/factcheck/explorer|https://toolbox.google.com/factcheck/explorer> (by default it only shows fact-check results in the browser's language) > There also seems to be a Search API: <https://toolbox.google.com/factcheck/apis|https://toolbox.google.com/factcheck/apis> > Lucien: how about doing it on the API server > so results can go straight into the DB after a query > or be converted into a kind of reply

github 17:35:00

#273 Revamp deleted reply list in article page

Spec: <https://g0v.hackmd.io/dYT7zCPGQiKsTne7-kkybw|https://g0v.hackmd.io/dYT7zCPGQiKsTne7-kkybw> Directions: only show deleted article-reply to its author

mrorz 17:39:17
Next, I'll open tickets for the unfinished items from the previous phase of Cofacts Next and track them as issues.

Here I'd like to ask @stbb1025 to mark which of the enhancements mentioned in this earlier slide deck are already done on production (with a green check mark or the like), so it's easier for me to sort out what still needs doing >“< Thanks a lot
https://docs.google.com/presentation/d/18VnEBMr9m-t81ppRwHcjbA1keltg-CIn7wUF9pu1oLo/edit
Also, @yanglin5689446, please help update this as well
https://github.com/orgs/cofacts/projects/5

Since the notes don't reference issues / PRs, I'm not quite sure which ones should be moved >“<

Once you've updated it, I'll turn the Todo items into tickets~
OK 👌 should we go by the state of the main site?
mrorz