AI, Privacy, and User Control with Paul Harrison

In this episode, Corey Quinn is joined by Paul Harrison, Senior Security Engineering Lead at Mattermost, for a discussion of the often-overlooked ethical implications of artificial intelligence in technology. They discuss how the rapid adoption of AI technologies can compromise user privacy and consent, reflecting on instances where companies prioritize innovation at the expense of these core values. Their conversation highlights Mattermost's dedication to data privacy and user control, positioning the company as a privacy-centric alternative in the tech landscape.

Show Highlights: 

(00:00) Introduction to the episode 

(01:50) How companies compromise privacy in the rush to adopt AI

(04:10) What is Mattermost? Paul explains the self-hostable, privacy-focused communication platform

(06:00) The evolution of chat platforms and Mattermost's unique position compared to Slack

(10:01) Paul elaborates on how Mattermost enables user control over data and customization

(14:23) The implications and challenges of integrating AI into everyday applications

(20:35) AI’s potential risks and unintended consequences, particularly in data management and security

(25:14) Paul and Corey critique tech companies’ approach to AI and data privacy

(28:59) Closing remarks and where to find more information about Paul Harrison and Mattermost


About Paul:

Paul Harrison is a Senior Security Engineering Lead at Mattermost, responsible for its Security Operations team. Prior to this, he led Security Operations at GitLab and several other emerging tech companies. Paul specializes in building security operations and infrastructure security programs, giving companies a secure footing as they grow.

Links Referenced:


