#cofacts
2026-01-04
Alfred chen
19:21:22
@robert79kimo has joined the channel
Alfred chen
19:21:28
Quick question: can I just show up at the Cofacts work meeting, or do I need to sign up first?
bil
23:28:28
You can just come directly!
2026-01-05
Alfred chen
00:53:00
OK 👌
Woody Shih
17:46:18
@dog850402 has joined the channel
2026-01-06
mrorz
17:25:05
HackMD
Cofacts meeting minutes - [Search](<https://cse.google.com/cse?cx=71f4f7ee215d54fe6>) - 2026 …
Alfred chen
20:00:50
By the way, how do I get up there?
Alfred chen
20:01:32
:melting_face:
Alfred chen
20:02:10
Thanks
Alfred chen
20:02:33
emmm
Alfred chen
20:02:35
hello
2026-01-08
cai
21:40:22
I saw this MyGoPen article:
https://www.mygopen.com/2026/01/taichung.html
It says 359 clinics accept the Senior & Disability Card (敬老愛心卡), but when I checked the Health Bureau website, the list of clinics offering the NT$50 subsidy only has 91:
https://www.health.taichung.gov.tw/3181762/post
So a clinic that accepts the card doesn't necessarily offer the subsidy?
MyGoPen
Link preview: A message circulating online about a "Taichung senior card discount" claims that Taichung residents using the Senior & Disability Card (敬老愛心卡) can see a doctor at 19 hospitals with the registration fee offset, and lists the eligible hospitals. Fact-checking found that the claim describes a plan the Taichung City Government only "intended" to launch in 2018 and never formally implemented; under current policy, the card actually applies at public health centers and primary-care clinics, not district hospitals, regional hospitals, or med…
2026-01-09
mrorz
14:58:12
Thanks @iacmai for the report, and Charles for the super quick response
mrorz
14:58:30
Long live the message bridge
mrorz
15:25:34
Received a Cloudflare healthcheck alert at 15:21; resolved at 15:25.
mrorz
2026-01-09 15:26:51
Engineering-wise, the direction is still to move url-resolver off this server.
mrorz
2026-01-13 14:03:31
This one didn't get forwarded by the message bridge?
mrorz
2026-01-13 14:06:28
cc/ @pm5 the bridge seems to be acting up
pm5
2026-01-13 14:48:21
OK, I'll take a look once I'm home from work.
pm5
2026-01-13 23:20:25
Looks a bit tricky. I may have to deal with it tomorrow.
mrorz
2026-01-14 12:29:43
Thanks for your hard work QQ
pm5
It recovered in #測試 after a restart. @mrorz give it another try.
2026-01-14
Alfred chen
08:05:16
Just found another one that's pretty wild
Alfred chen
08:07:27
Oh, never mind, it was a sentence-segmentation issue
mrorz
12:29:02
I'm not even sure whether this counts as a hallucination
Yes, it's like that, but not like that.jpg
2026-01-15
@null
11:31:28
Nice, thanks!
2026-01-16
mrorz
09:48:56
*Incident Summary: Production Server Memory Exhaustion*
Incident Duration: 2026-01-16 09:35 - 09:45 (TPE)
Status: Resolved
*Timeline*
• 09:35 - Report: User reports production server is down.
• 09:36 - Diagnosis: Investigation revealed server load at 29.43 with Memory (16GB) and Swap (4GB) both 100% full. The largest process was identified as `java` (Elasticsearch).
◦ _Source: SSH `top` command._
• 09:38 - Auto-Recovery: The operating system’s OOM Killer terminated the Elasticsearch process to recover memory. The Docker daemon immediately auto-restarted the `db` container.
◦ _Source: `dmesg` logs & `docker inspect` timestamp (09:38:28)._
• 09:44 - Manual Action: User manually restarted the `url-resolver` container to ensure a fresh state.
◦ _Source: User report & `docker ps` uptime._
• 09:45 - Resolution: Server load stabilized at 1.75. All core services (`db`, `api`, `line-bot-zh`) were confirmed up and running.
◦ _Source: SSH `uptime` & `docker-compose ps`._
*Root Cause*
Memory Exhaustion: The Elasticsearch `java` process consumed over 9GB of RAM, eventually filling both physical memory and swap space. This caused the system to thrash and become unresponsive until the OS forcibly killed the process.
*Final Status*
• Healthy: System load is normal.
• Service Uptime:
◦ `db`: ~10 minutes (Auto-recovered)
◦ `url-resolver`: ~4 minutes (Manually restarted)
• Other services: >2 weeks (Unaffected)
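For reference, a minimal sketch of the checks cited in the timeline above. The container names (`db`, `url-resolver`) come from the summary itself; the exact flags and grep patterns are illustrative, not the commands that were actually run.
```
# Confirm the OOM kill and container restarts described in the incident summary.
top -b -n 1 | head -n 15                                      # load average and top memory consumers
dmesg -T | grep -iE 'out of memory|oom-kill|killed process'   # kernel OOM Killer evidence
docker inspect -f '{{.State.StartedAt}}' db                   # when Docker auto-restarted the db container
docker ps --format '{{.Names}}\t{{.Status}}'                  # per-container uptime
docker restart url-resolver                                   # the manual restart at 09:44
uptime && docker-compose ps                                   # confirm load and services have settled
```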
2026-01-17
chiao3
14:01:27
@chiao3.su has joined the channel
2026-01-18
mrorz
17:02:25
OCR seems to have been broken for a few days
mrorz
2026-01-18 17:07:31
Since early January, images and videos have had no transcripts at all, so the similarity-matching feature must have been broken.
mrorz
2026-01-18 17:26:02
Ugh, a few keys in my env-file had their variable names accidentally changed when I copied them over from the yml.
Only those few keys were affected; everything else was fine.
mrorz
2026-01-18 17:43:30
Fixed on 1/18 17:42
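A minimal sketch of how a copy error like the one above could be caught, assuming the env-file was copied from a map-style `environment:` block in a compose yml. The file names (`docker-compose.yml`, `api.env`) are placeholders, not the actual deployment files.
```
# Hypothetical check: list the variable names on each side and diff them.
grep -oE '^ +[A-Z][A-Z0-9_]+:' docker-compose.yml | tr -d ' :' | sort > /tmp/yml-keys
grep -oE '^[A-Z][A-Z0-9_]+' api.env | sort > /tmp/env-keys
diff /tmp/yml-keys /tmp/env-keys   # any output is a key whose name drifted during the copy
```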
2026-01-19
玄米
10:29:12
@vickylee.g123 has left the channel
2026-01-20
mrorz
15:19:31
HackMD
Cofacts meeting minutes - [Search](<https://cse.google.com/cse?cx=71f4f7ee215d54fe6>) - 2026 …
2026-01-22
mrorz
01:54:05
https://github.com/cofacts/rumors-api/pull/378 passes the tests now and is ready for review.
It turned out the test was failing on the transcript step, probably because the Gemini model was silently updated, so I quickly made a fix and merged it into master.
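As a side note on silent model updates: the public Generative Language REST API exposes model metadata, which can show what an alias currently serves. This is only an illustration; the model name and the $GOOGLE_API_KEY variable are assumptions, not values from rumors-api.
```
# Illustrative: fetch the metadata (name/version) the Gemini API currently reports for a model alias.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro?key=${GOOGLE_API_KEY}" \
  | grep -E '"(name|version|displayName)"'
```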
2026-01-24
mrorz
12:27:35
Replied to a thread: 2026-01-22 01:54:05
The tests are failing again, because I turned the test into an integration test to track down the problem ._.
A few days ago I noticed the code was written wrong, so image OCR never actually ran.
I also fixed the service-account permission issue on staging, so staging works properly now.
The trade-off is that the unit tests no longer run.
I'd like to merge first and then backfill the earlier transcripts on production; some tariff-related images from January through last week should have had transcripts made. orz
2026-01-25
A4
16:26:35
FYI, it works now.