<?xml version='1.0' encoding='utf-8' ?>

<rss version='2.0' xmlns:lj='http://www.livejournal.org/rss/lj/1.0/' xmlns:atom10='http://www.w3.org/2005/Atom'>
<channel>
  <title>Praise then darkness, and creation unfinished</title>
  <link>https://redbird.dreamwidth.org/</link>
  <description>Praise then darkness, and creation unfinished - Dreamwidth Studios</description>
  <lastBuildDate>Thu, 04 Apr 2019 12:01:50 GMT</lastBuildDate>
  <generator>LiveJournal / Dreamwidth Studios</generator>
  <lj:journal>redbird</lj:journal>
  <lj:journaltype>personal</lj:journaltype>
  <image>
    <url>https://v2.dreamwidth.org/222785/52751</url>
    <title>Praise then darkness, and creation unfinished</title>
    <link>https://redbird.dreamwidth.org/</link>
    <width>94</width>
    <height>100</height>
  </image>

<item>
  <guid isPermaLink='true'>https://redbird.dreamwidth.org/2798857.html</guid>
  <pubDate>Thu, 04 Apr 2019 12:01:50 GMT</pubDate>
  <title>limits of de-biasing language</title>
  <link>https://redbird.dreamwidth.org/2798857.html</link>
  <description>An interesting article on the difficulties of &lt;a href=&quot;https://www.oreilly.com/ideas/de-biasing-language&quot;&gt;de-biasing language&lt;/a&gt;, from a machine learning viewpoint. The author notes that simple approaches can hide bias in automated systems without removing it: for example, an algorithm trained on a biased dataset in which &quot;programmer&quot; clusters with words more often found on men&apos;s resumes will weight those words, even though they may be irrelevant to job qualification. At the same time, the effort is worth making; even if a completely unbiased algorithm isn&apos;t possible with current methods in a society with baked-in prejudices, a less-biased one will get better results if the goal is (say) to hire qualified programmers, or to make loan decisions based on ability to repay rather than on race or gender.&lt;br /&gt;&lt;br /&gt;&lt;blockquote&gt;The problem we’re facing in natural language processing (as in any application of machine learning) is that fairness is aspirational and forward looking; data can only be historical, and therefore necessarily reflects the biases and prejudices of the past. Learning how to de-bias our applications is progress, but the only real solution is to become better people. &lt;/blockquote&gt;&lt;br /&gt;&lt;br /&gt;(via Richard Mateosian, on Copyediting-L.)&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.dreamwidth.org/tools/commentcount?user=redbird&amp;ditemid=2798857&quot; width=&quot;30&quot; height=&quot;12&quot; alt=&quot;comment count unavailable&quot; style=&quot;vertical-align: middle;&quot;/&gt; comments</description>
  <comments>https://redbird.dreamwidth.org/2798857.html</comments>
  <category>language</category>
  <category>prejudice</category>
  <category>links</category>
  <category>non-gendered language</category>
  <category>algorithms</category>
  <lj:security>public</lj:security>
  <lj:reply-count>4</lj:reply-count>
</item>
</channel>
</rss>
