Governing AI in Public HRM: A Critical Analysis of Taiwan’s Draft Artificial Intelligence Basic Law

Authors

  • Yu-Sheng Yang, Institute of Political Economy, College of Social Sciences, National Cheng Kung University, Taiwan

DOI:

https://doi.org/10.18196/jgpp.v12i3.26878

Keywords:

Public human resource management, Data privacy, Algorithmic fairness, Decision transparency, Accountability

Abstract

This study evaluates the suitability of the Draft Artificial Intelligence Basic Law (2024) for public human resource management (PHRM) in Taiwan, focusing on data privacy, algorithmic fairness, decision transparency, and accountability. PHRM encompasses recruitment, evaluation, and appointment processes that rely extensively on personal data and algorithms, entailing significant legal and ethical risks. Using qualitative methods, this study compares Taiwan’s approach with the EU’s risk-based model and the US’s market-driven model. Triangulation and institutional analysis are employed to assess the draft’s provisions on legitimacy, fairness, and accountability. The findings show that the draft omits key rights such as data portability, the right to be forgotten, and data protection impact assessments (DPIA), and lacks algorithm audits, disclosure, and appeal mechanisms. These gaps may lead to bias, opacity, and violations of rights, with the risks amplified under conditions of regulatory flexibility. The novelty of this study lies in its integration of AI governance with the specific context of public human resource management in Taiwan, an area where legal and ethical risks are high but underexplored in the existing literature. Unlike prior research that mainly addresses AI governance in commercial or general administrative domains, this study highlights how the distinctive features of PHRM—such as recruitment algorithms and performance evaluation systems—intersect with data rights and accountability requirements. By situating the Draft AI Law within this sensitive policy arena, the study extends ICT adoption theories beyond traditional models emphasizing usefulness and ease of use, foregrounding public values, ethical safeguards, and institutional legitimacy.
From a policy perspective, this study recommends strengthening data rights, establishing compliance and audit systems, creating independent regulatory bodies, and implementing disclosure requirements, thereby providing both theoretical and practical insights for AI governance in Taiwan and the broader region.

References

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. The MIT Press.

Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of Machine Learning Research (Vol. 81, pp. 149–159). PMLR.

Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91–121. https://doi.org/10.1093/ijlit/eay017

Brynjolfsson, E., & McAfee, A. (2017). The business of artificial intelligence. Harvard Business Review, 7(1), 1–2.

Bryson, J. J. (2020). The Artificial Intelligence of the Ethics of Artificial Intelligence. In The Oxford Handbook of Ethics of AI (pp. 2–25). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.1

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512

Bygrave, L. A. (2017). Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements. Oslo Law Review, 4(2), 105–120. https://doi.org/10.18261/issn.2387-3299-2017-02-03

Calo, R. (2017). Artificial Intelligence Policy: A Roadmap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3015350

Cordella, A., & Bonina, C. M. (2012). A public value perspective for ICT enabled public sector reforms: A theoretical reflection. Government Information Quarterly, 29(4), 512–520. https://doi.org/10.1016/j.giq.2012.03.004

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t

Creswell, J. W., & Creswell, J. D. (2018). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (5th ed.). SAGE Publications.

Dastin, J. (2022). Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women. Ethics of Data and Analytics, 296–299. https://doi.org/10.1201/9781003278290-44

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319. https://doi.org/10.2307/249008

de Fine Licht, K., & de Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision-making. AI & SOCIETY, 35(4), 917–926. https://doi.org/10.1007/s00146-020-00960-w

Denzin, N. K., & Lincoln, Y. S. (2017). The Sage Handbook of Qualitative Research (5th ed.). SAGE Publications.

Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. http://arxiv.org/abs/1702.08608

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts (COM (2021) 206 final).

Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and Removing Disparate Impact. Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 259–268. https://doi.org/10.1145/2783258.2783311

Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719860542

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3518482

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Friedewald, M., Schiering, I., Martin, N., & Hallinan, D. (2022). Data Protection Impact Assessments in Practice (pp. 424–443). https://doi.org/10.1007/978-3-030-95484-0_25

Fuster, G. (2020). The emergence of data minimization as a legal principle for AI regulation. European Journal of Risk Regulation, 11(3), 457–472.

Gellert, R. (2019). Data portability and data control: Democratic aspects of the GDPR. Computer Law & Security Review, 35(2), 163–174.

Gellert, R. (2021). Understanding data minimization in the age of AI: Challenges and policy recommendations. International Data Privacy Law, 11(2), 85–102.

Gomez, J. (2018). The right to be forgotten: Enforcement and compliance. Journal of Information Policy, 8, 1–23.

Han, T. A., Pereira, L. M., Lenaerts, T., & Santos, F. C. (2021). Mediating artificial intelligence developments through negative and positive incentives. PLOS ONE, 16(1), e0244592. https://doi.org/10.1371/journal.pone.0244592

Hilliard, A., Gulley, A., Koshiyama, A., & Kazim, E. (2024). Bias audit laws: how effective are they at preventing bias in automated employment decision tools? International Review of Law, Computers & Technology, 1–17. https://doi.org/10.1080/13600869.2024.2403053

Hintze, M. (2018). Viewing the GDPR through a de-identification lens: a tool for compliance, clarification, and consistency. International Data Privacy Law, 8(1), 86–101. https://doi.org/10.1093/idpl/ipx020

Holstein, K., Wortman Vaughan, J., Daumé, H., Dudik, M., & Wallach, H. (2019). Improving Fairness in Machine Learning Systems. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3290605.3300830

Janssen, M. (2025). Responsible governance of generative AI: conceptualizing GenAI as complex adaptive systems. Policy and Society, 44(1), 38–51. https://doi.org/10.1093/polsoc/puae040

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Jørgensen, T. B., & Bozeman, B. (2007). Public Values. Administration & Society, 39(3), 354–381. https://doi.org/10.1177/0095399707300703

Kamarinou, D., Millard, C., & Singh, J. (2016). Machine Learning with Personal Data. Communications of the ACM, 55(10), 78–87. https://doi.org/10.1145/2347736.2347755

Kuner, C., Bygrave, L. A., Docksey, C., Drechsler, L., & Tosoni, L. (2021). The EU General Data Protection Regulation: A Commentary/Update of Selected Articles. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3839645

Mattoo, A., & Meltzer, J. P. (2019). International Data Flows and Privacy: The Conflict and Its Resolution. Journal of International Economic Law, 21(4), 769–789. https://doi.org/10.1093/jiel/jgy044

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2022). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Miles, M. B., & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook (2nd ed.). SAGE Publications.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679

Mökander, J. (2023). Auditing of AI: Legal, Ethical and Technical Approaches. Digital Society, 2(3), 49. https://doi.org/10.1007/s44206-023-00074-y

Pasquale, F. (2015). The Black Box Society. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 469–481. https://doi.org/10.1145/3351095.3372828

Raji, I. D., & Buolamwini, J. (2023). Actionable Auditing Revisited. Communications of the ACM, 66(1), 101–108. https://doi.org/10.1145/3571151

Raji, I. D., Xu, P., Honigsberg, C., & Ho, D. (2022). Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 557–571. https://doi.org/10.1145/3514094.3534181

Richards, N., & Hartzog, W. (2018). The Pathologies of Digital Consent. Washington University Law Review, 96(6), 1461–1503.

Solove, D. J. (2021). The Myth of the Privacy Paradox. George Washington Law Review, 89(1), 1–51.

Suresh, H., & Guttag, J. (2021). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–9. https://doi.org/10.1145/3465416.3483305

Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 205395171774353. https://doi.org/10.1177/2053951717743530

Voigt, P., & von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR). Springer International Publishing. https://doi.org/10.1007/978-3-319-57959-7

Wagner, B. (2019). Ethics As An Escape From Regulation. From “Ethics-Washing” To Ethics-Shopping? In BEING PROFILED (pp. 84–89). Amsterdam University Press. https://doi.org/10.1515/9789048550180-016

Weller, A. (2019). Transparency: Motivations and Challenges (pp. 23–40). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-28954-6_2

Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial Intelligence and the Public Sector—Applications and Challenges. International Journal of Public Administration, 42(7), 596–615. https://doi.org/10.1080/01900692.2018.1498103

Wirtz, B. W., Weyerer, J. C., & Sturm, B. J. (2020). The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration. International Journal of Public Administration, 43(9), 818–829. https://doi.org/10.1080/01900692.2020.1749851

Wright, D., & De Hert, P. (2012). Introduction to Privacy Impact Assessment (pp. 3–32). https://doi.org/10.1007/978-94-007-2543-0_1

Wuttke, A., Rauchfleisch, A., & Jungherr, A. (2025). Artificial Intelligence in Government: Why People Feel They Lose Control. https://arxiv.org/abs/2505.01085v1

Published

2025-10-03

How to Cite

Yang, Y.-S. (2025). Governing AI in Public HRM: A Critical Analysis of Taiwan’s Draft Artificial Intelligence Basic Law. Journal of Governance and Public Policy, 12(3), 250–263. https://doi.org/10.18196/jgpp.v12i3.26878