Electronic Theses and Dissertations (Masters/MBA)
Permanent URI for this collection: https://hdl.handle.net/10539/37942
Search Results
2 results
Item A thematic synthesis of ethics principles in artificial intelligence (University of the Witwatersrand, Johannesburg, 2024) Oberholzer, Joanna

In an era marked by rapid advancements in artificial intelligence (AI), the ethical dimensions of AI development and deployment have become increasingly pivotal. As AI technologies permeate diverse sectors, the need for a comprehensive understanding of the ethical principles governing their use has intensified. This research employs reflective thematic analysis to scrutinise the ethical landscape of AI, discern consensus among stakeholders, and evaluate the practicality of implementing ethical principles. Leveraging the critical-systems-heuristics framework, the study explores implicit assumptions, power dynamics, and contextual intricacies for a nuanced analysis. Data from 156 entities form the basis for a qualitative thematic synthesis, revealing the motivations, control mechanisms, knowledge sources, and legitimacy factors guiding AI-ethical principles. Key findings spotlight the prevalence of ethics documents in the private sector, driven by market competition, corporate social responsibility, regulatory compliance, and stakeholder expectations. Europe and North America emerged as leaders in document publication, reflecting their technological prowess. Government agencies uniquely emphasise transparency. Variations in prioritised principles across stakeholders unveil distinct motivations aligned with organisational goals. Challenges impeding AI-ethics implementation encompass vague principles, global regulatory disparities, data-privacy concerns, and resource limitations. The study unravels the worldviews that shape AI ethics, with private organisations valuing human-centricity, accountability, and legitimacy through representation and consensus. The outcomes contribute theoretical insights and practical recommendations, guiding the responsible development of AI technologies.

Item Bias in data used to train sales-based decision-making algorithms in a South African retail bank (2021) Wong, Alice

Banks are increasingly using algorithms to drive informed and automated decision-making. Because algorithms rely on training data to learn the correct outcome, banks must ensure that customer data is used securely and fairly when creating product offerings, as there is a risk of perpetuating intentional and unintentional bias. This bias can result from unrepresentative and incomplete training data, or from data that is inherently biased due to past social inequalities. This study aimed to understand the potential bias in the training data used to train the sales-based decision-making algorithms with which South African retail banks create customer product offerings. The research adopted a qualitative approach and was conducted through ten virtual one-on-one interviews with semi-structured questions. Purposive sampling was used to select banking professionals from data science teams in a particular South African retail bank, across demographics and levels of seniority. The interview data were then thematically analysed to draw conclusions from the findings. Key findings included:

An inconsistent understanding across data science teams in a South African retail bank of the prohibition on using the gender variable. This could result in certain developers using proxy variables for gender to inform certain product offerings.

A potential gap concerning the use of proxy variables for disability (due to the non-collection of this demographic attribute) to inform certain product offerings. Although disability was not identified as a known biased variable, it raised the question of whether banks should collect customers' disability data and do more, in terms of social responsibility, to address social inequalities and enable disabled individuals to contribute as effectively as abled individuals. Because algorithms tend to generalise based on the majority's requirements, underrepresented or minority groups would experience a higher error rate. This could result in financial exclusion or in incorrect products being offered to certain groups of customers, which, if not corrected, would lead to the continued subordination of certain groups of customers based on demographic attributes.
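The generalisation effect described in that last finding can be made concrete with a minimal sketch, not drawn from the thesis itself: the data, group sizes, distributions, and choice of model below are all synthetic assumptions. A single classifier is fitted to data dominated by one group, and per-group error rates are then compared.

```python
# Illustrative sketch only: a classifier trained on data dominated by a
# majority group tends to show a higher error rate on an underrepresented
# group, because the fitted decision boundary tracks the majority
# distribution. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features; the true decision boundary differs by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# 95% majority group, 5% minority group with a shifted distribution.
X_maj, y_maj = make_group(9500, shift=0.0)
X_min, y_min = make_group(500, shift=2.0)

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])

# One model is fitted to the pooled data, as a bank-wide algorithm would be.
model = LogisticRegression().fit(X, y)

# Per-group evaluation: the minority group's error rate is markedly higher,
# since the learned boundary is dominated by the majority distribution.
for name, Xg, yg in [("majority", X_maj, y_maj), ("minority", X_min, y_min)]:
    err = 1.0 - model.score(Xg, yg)
    print(f"{name} error rate: {err:.3f}")
```

Under these assumptions the minority group's error rate is several times that of the majority group, which is the mechanism behind the financial-exclusion risk the abstract describes.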