Hello. Looking forward to meeting everyone in May.
1. 與你有關的三個關鍵字 / Give us three keywords about yourself: human rights, current affairs, pop culture
2. 怎麼知道 g0v 的 / How did you know about g0v? Not sure where I first learnt about g0v but have been following the community's work from afar for the past 2-3 years. Big fan of cofacts.
3. 平常從事的職業或熟悉的領域、專長 / What do you do? I am the digital rights and open tech manager at EngageMedia. My current work deals with localizing digital security resources for Southeast Asian HRDs, researching internet interference in Indonesia and the Philippines, and building community resilience against online misinformation, amongst others.
4. 有興趣的議題 / Issues you care about: Decolonising human rights movements, human rights tech, and regulation of digital spaces
5. 怎麼聯絡 / Leave your contact here:
khairil@engagemedia.org /
kh@rlzh.fr
6. 想說的話或想做的事 / Anything you want to say: I will be presenting at a session on a prototype for an online misinfo management system for fact-checking teams. Would love to connect with folks working on similar issues.
Hi, @mrorz here. Looking forward to seeing you in g0v summit!
(Even though our sessions' time slots conflict 😛)
@slack1053 Regarding the video and image messages, we did some research when we implemented video and image indexing on Cofacts.
Tech notes here:
https://g0v.hackmd.io/@cofacts/rd/%2FaJqHn8f5QGuBDLSMH_EinA
(We also have tech notes on other topics at
https://g0v.hackmd.io/@cofacts/rd/ )
Although the tech note discusses multimedia indexing at length, we haven’t actually implemented anything beyond simple hashes. What we found most useful are traditional OCR and transcripts.
Many of the messages we collect are images with text or videos with voice-overs, so OCR text and Whisper transcripts are very important info tags that help us identify recurring messages published in different formats.
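To make the idea concrete, here is a minimal sketch of that indexing approach: a simple file hash for exact-duplicate lookup, plus OCR text and a Whisper transcript as extra info tags. The library choices (pytesseract, openai-whisper) and file names are assumptions for illustration, not necessarily what Cofacts runs in production.

```python
# Sketch: build "info tags" for image and video messages so the same rumor
# republished in different formats can still be matched.
import hashlib

from PIL import Image
import pytesseract   # pip install pytesseract (needs the tesseract binary)
import whisper       # pip install openai-whisper (needs ffmpeg for video/audio)


def file_hash(path: str) -> str:
    """Simple exact-match hash over the raw file bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def image_info_tags(path: str) -> dict:
    """Hash + OCR text for an image message (e.g. a screenshot with text)."""
    # Language packs are illustrative; adjust to the languages you expect.
    text = pytesseract.image_to_string(Image.open(path), lang="chi_tra+eng")
    return {"hash": file_hash(path), "ocr_text": text.strip()}


def video_info_tags(path: str, model_name: str = "base") -> dict:
    """Hash + Whisper transcript for a video message with a voice-over."""
    model = whisper.load_model(model_name)
    result = model.transcribe(path)
    return {"hash": file_hash(path), "transcript": result["text"].strip()}


if __name__ == "__main__":
    # Hypothetical files, just to show how the tags would be produced.
    print(image_info_tags("rumor_screenshot.jpg"))
    print(video_info_tags("forwarded_clip.mp4"))
```

The hash catches byte-identical re-uploads, while the OCR/transcript text can be fed into a normal text index for fuzzier matching across formats.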
Oh wow this is really helpful. I didn’t think of some of these things 😝 thank you!