The machines are coming for our written applications and reports. Are you ready?


It’s almost impossible to escape the flood of articles, blogs and hot takes about ChatGPT. But how might ChatGPT and generative AI change funding for the international development sector?

Will it be the game-changing moment we’ve been waiting for as we advocate for more participatory, humane, decolonized and trust-based funding? Or will it strengthen and entrench much of the current way of working that desperately needs change?

The machines that write applications and reports are already here…

A few months ago I blogged about how good ChatGPT is at writing grant applications. Generative AI tools have only improved since then, and we’ll soon be using them every day once Microsoft Copilot and Google’s “Help me write” roll out. It’s easy to see why many time-pressed grant writers are already using ChatGPT and Google’s Bard to draft their grant applications and fundraising reports. If you find yourself facing yet another tedious application or report, why not ask the machine to answer the same questions that countless other funders have asked before? Why wouldn’t you?

On the other side of the fundraising coin, the potential uses for funders are almost endless. AI could be used for initial application screening. It could streamline due diligence, find and extract patterns and lessons from sets of reports, and pull together information for governance more consistently and intentionally. It could host online conversations with applicants, verify eligibility, filter applications and provide feedback on their quality.

Criticisms of long written applications and reports are well documented. The forms have always privileged the voice of the written word – the voice of the expert grant writer, or of an organization large enough to invest in a MEL team that knows how to “play the game” best. They lock funders into linear, outcome-based project funding, maintain a hierarchy in which the funder knows best, and keep everyone “grading homework.”

Does AI mean we’ll finally move away from our dependence on written tools? I’ve outlined two potential scenarios below (there are likely to be many more).

Scenario 1: AI entrenches the current system

This scenario is very likely, and very scary. We end up in an arms race of ever more AI-generated text. I suspect most funders have already received applications written (or assisted) by ChatGPT. AI grant-writing tools are popping up all over the place (examples here, here and here). Fundraisers who have read this blog have already told me that these tools will allow them to produce even more applications. Using AI to write better applications faster will encourage more people to apply for more funds. Funders, inundated with applications, will in turn use more AI tools to filter, evaluate and judge the growing pile.

AI makes it easier for us to work and play within the current system, rather than questioning whether the system works at all, and so it entrenches everything that is currently wrong with it. Some funders, horrified by the possibility that people will “game” the system, will double down: investing in AI tools to police the application process rather than stepping back and questioning how the system itself works.

Given the rapid mainstreaming of generative AI, this is by no means a distant future scenario.


Scenario 2: AI refocuses grant-making around human interaction, trust and relationships

The second scenario is less dystopian and more radical. It’s the one where AI finally kills off written applications and reports. Those who want funders to decolonize their ways of working, shift power and become more participatory and progressive should be very happy.

Rather than panicking about everyone “cheating” on their applications and reports, funders see this as an opportunity to step back and rethink a better funding system. They abandon traditional written applications, which AI tools can easily game, and rely instead on something that is not yet gameable: human interaction.

To select potential grant recipients and assess their impact, funders talk to the people connected to the programs and to the communities in which they work. Funders develop more relational, more human methods of evaluation and reporting.

Suddenly, having community members and people with lived experience of the problem discuss and decide on applications is no longer seen as a risky way to allocate funding. Bringing the people you fund together and really listening to them about the impact they are having, what is working and what is not, becomes a far more credible way to get a “report.” It removes the need for linear, project-based reporting, making it easier for everyone to understand the impact of unrestricted, non-project funding. Devolving funds to local and national grant-makers in the Global South now makes perfect sense. Everything that many once dismissed as “too risky” becomes the perfect mitigation against anyone using ChatGPT to write applications and reports.

The moral panic around AI and ChatGPT forces funders, organizations and communities to move away from paper and connect on a human level – which can only be a good thing.

What can you do as a funder?

The important thing is to start talking about it. Get it on your staff and board agendas. You don’t have to be an AI expert. Try ChatGPT, Microsoft Copilot, Bard and other tools. You could even use one to fill out your own reporting form, or to apply for one of your own grants.

Consider possible scenarios for how AI might affect you and the people you work with. There are many. The two above are good starting points, but you can come up with others. What about racial bias in AI? Could fundraising processes be opened up and democratized? Can we decolonize language? How can CSOs use AI in their own work? What is civil society’s role in regulating AI?

There is no “right” scenario, but ask yourself which one you would prefer. What can you do now to make it more likely to happen, and what can you do to make the scenario you don’t want less likely? That should give you a useful task list. Hurry, though… the machines are coming for our written applications and reports sooner than we think.

