There is no better feeling than sitting back at your desk after a good conference and feeling energized from the learning sessions, connections with industry friends and partners, and the memories of all the receptions. And then the realization hits that you need to put what you learned into action.
Of course, just like any other conference I’ve attended in the past few years, the biggest topic of conversation was AI (Artificial Intelligence).
From keynote sessions to casual hallway conversations, AI dominated the agenda, highlighting both its potential and the challenges it poses for associations. While the excitement around AI is everywhere, there are different adoption levels (both intentional and not) and significant concerns – particularly around ethics, policies, and the responsible use of these tools.
AI is everywhere! And will help with the age-old issue of “Doing More with Less”
Let’s be clear: AI is becoming part of every organization’s operations. The technology is now embedded in a wide range of tools we all use daily, from word processing (how many of you use Grammarly or have activated Microsoft Copilot in your Microsoft Office account?) to data analysis (such as Google Analytics’ Insights) to the growing adoption of generative AI tools such as ChatGPT, Gemini, and more.
As associations respond to the traditional “do more with less” mantra, AI becomes a tool to help with exactly that. With more channels to market in, more products and events to promote, and a growing demand for personalization, AI offers a way to manage these increasing demands without stretching resources too thin.
When applied smartly, AI can help associations streamline operations, enhance member engagement, and improve decision-making processes. For example, AI can automate routine tasks, freeing up staff to focus on higher-value activities. It can also analyze large datasets to uncover insights that might be missed by human analysts, helping associations make more informed decisions.
AI needs to be a tactical execution tied to an organization’s strategic objectives
For those of us who have been around technology for a while, AI might feel like the latest case of “Shiny Object Syndrome” as boards and leaders rush into the space and vendors roll out new tools and features.
However, the key to leveraging AI effectively lies in its alignment with the organization’s strategic objectives. It’s not about adopting AI for the sake of it but about understanding how it can support your mission and enhance your services. This requires careful planning, clear goals, and a willingness to adapt as the technology evolves.
For example:
- Are you trying to increase content production in an increasingly noisy marketplace? Consider the use of generative AI.
- Are you trying to benchmark industry data and help members understand how they are performing in a larger, more complex market? Look at machine-learning-based analytics.
- Are you trying to optimize operations and service? Consider AI-powered automation or AI-based chat.
In each case, the starting point is a strategic objective; AI is simply the tool used to complete the task.
Ethics and Transparency: Growing Concerns for Associations
Perhaps the biggest concerns raised focused on transparency and ethics – key tenets of why people turn to associations in the first place.
First, associations need to be clear about how and why they are using AI. For example, if an organization is using generative AI to draft content (as I did to create the first draft of this blog post), it should be transparent about this process (see what I did there?). Or if AI is being used to create meeting summaries, then participants need to know that they are being recorded and that their comments will become part of the AI meeting summary.
However, the biggest potential blind spots associations face center on ethical concerns. AI presents ethics risks that are especially acute for associations in terms of:
- Misinformation – Is the information accurate? Or is there bad content and context created by AI hallucinations or bad data?
- Bias and Discrimination – Are the AI tools acting ethically and eliminating bias and discrimination? For example, if you are using AI to screen session abstracts or speaking submissions, is it set up to evaluate submissions on quality rather than on the name or background (e.g., gender, ethnicity) of the submitter?
- Copyright and Intellectual Property – Has the organization unknowingly published content it doesn’t have rights to or accidentally given up its own IP by feeding content into a public AI tool?
- Privacy and Data Security – Is the association accidentally exposing proprietary or confidential information? (In fact, this is an issue we ran into while testing out some AI tools here at Yoko Co.)
- Accountability – Since an association is supposed to be the expert on a specific topic, has a trusted human reviewed and approved the content, or was that left up to the AI?
Another consideration is that associations rely heavily on volunteers, including industry experts and chapter leaders. These valuable volunteers contribute insights and expertise, but they may not always be fully aware of, or compliant with, the organization’s AI policies. This raises important questions:
- Are volunteers following the guidelines?
- Are they aware of the ethical implications of using AI tools?
- Is the content accurate and being reviewed by a human?
- Does the AI-generated content violate copyright?
The good news is that the ASAE Ethics Committee will be partnering with other committees (including the ASAE Technology Advisory Council) to help create some policies and guidelines. More to come soon.
Policies: Not Just a One-Time Exercise
Speaking of policies, conference discussions also made clear that associations need to develop robust AI policies and guidelines.
But it’s not enough to simply draft a policy and leave it at that. These policies need to be living documents that are regularly reviewed, updated, and communicated to all stakeholders. This means going beyond the typical 400-page HR manuals that employees skim through when starting a job. Instead, associations should ensure that their AI policies are accessible, understandable, and regularly reinforced through training and updates.
Moreover, it’s crucial for these policies to address the dynamic nature of AI tools. For instance, the terms of service for AI-powered platforms can change frequently, sometimes with significant implications (such as the AI kerfuffle that Adobe stirred up earlier this year).
A tool that was once harmless could start using your content or images to train its data models, potentially exposing your organization to risks you hadn’t anticipated. Keeping up with these changes and ensuring compliance is a continuous task that requires vigilance.
Shadow IT and the Risks of Unmanaged AI Tools
Tied to policies and procedures is a risk associations have run into many times before: “shadow IT,” where employees start using tools without the knowledge or approval of leadership. This can lead to a host of problems, from security vulnerabilities to inconsistencies in how data is managed and used.
To mitigate these risks, associations need to have clear policies around the selection and use of AI tools. This includes ensuring that all AI usage is aligned with the organization’s strategic plan and that there is oversight from both IT and leadership. By doing so, associations can prevent the proliferation of unmanaged AI tools and ensure that their AI initiatives are both secure and effective.
The Importance of Data and Knowledge Governance
With AI tools becoming more sophisticated, the need for strong data and knowledge governance has never been more critical. These tools rely on accurate, up-to-date information to function effectively. If they are fed outdated or conflicting information, the results can be misleading or even harmful (see the previous points).
The boring fact is… associations need to prioritize and operationalize data and knowledge governance to ensure that their AI tools are surfacing the right information.
This means not only keeping your data and institutional knowledge clean and organized but also breaking down silos within your organization. AI cannot be effective if it’s only being used in isolated departments. Instead, its usage should be tied to the organization’s strategic plan, with a clear understanding of why AI is being implemented and what goals it is intended to achieve.
The Bottom Line: AI WILL be part of your operations. So you might as well have a plan.
The discussions at ASAE’s 2024 Annual Meeting underscored that while AI offers tremendous opportunities, it also comes with significant responsibilities. For association executives, the challenge lies in navigating these complexities to ensure that AI is used ethically, transparently, and strategically.
As AI continues to evolve, associations must stay ahead of the curve by developing robust policies, investing in data governance, and fostering a culture of transparency. By doing so, they can harness the power of AI to enhance their operations and better serve their members, all while maintaining the trust and integrity that are the hallmarks of successful associations.