FileMaker ChatGPT Integration

By Cath Kirkland, 19 March 2024

A simple integration of ChatGPT with FileMaker Pro that solves two business problems.

Introduction

By now, almost everyone has heard of ChatGPT, marvelling at its ability to write poetry, tell jokes, and entertain. But what about its practical applications in business?

Are there any? We think so, and we have started to solve some real-world problems with the technology. These simple use cases focus on how to use ChatGPT to improve or provide substitutes for the text we give it.

TL;DR?

We explored three methods for generating text, experimenting with different prompts and requests. We settled on the last method as the most feasible for our purposes: including function definitions in the request so that the model returns predictable, JSON-formatted results.

Download the Example file here

FileMaker ChatGPT Demo Screenshot

Problem 1: Unprofessional Texts (SMS)

Have you ever received an unprofessional text from a business? Maintaining a consistent company style in custom messages can be challenging. ChatGPT can take the original message and provide alternative options in the writing styles of your choice.

By using a variety of writing styles (tones), we can craft responses that align with the impression the business wants to give.

Here are some common styles or tones you could use to convert the original message. The prompt might include keywords such as: formal, casual, serious, friendly, optimistic, pessimistic, scientific, concise, descriptive, persuasive, professional, trustworthy or informative. Combinations of them work well too, for example:

  • Friendly and professional
  • Authoritative and informative
  • Urgent and persuasive
  • Casual and conversational
  • Professional and trustworthy

SMS Demo Screenshot

Problem 2: Poor SEO

When generating website content, it is common to draw on a FileMaker database. In this case we want to create a description for multiple suppliers, customising each entry by incorporating details such as their establishment date, the brands they represent and their location. Using these variables we can create a block of text which nicely summarises a unique description for each supplier.

However, an issue arises when the content, while unique, follows a repetitive structure, potentially harming SEO performance. It would be better to have the ability to quickly generate high-quality content without having to spend hours brainstorming or writing drafts.
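The kind of merge-field description we start with can be sketched as a simple template. This is a Python illustration only; the field names and supplier data are invented, and in the article the merge happens inside FileMaker:

```python
# Merge-field supplier description. The field names (name, established,
# brands, location) are illustrative placeholders.
TEMPLATE = ("{name} has been operating in {location} since {established}, "
            "stocking brands such as {brands}.")

def describe(supplier):
    return TEMPLATE.format(**supplier)

suppliers = [
    {"name": "Acme Blinds", "location": "Sydney", "established": 1998,
     "brands": "Luxaflex and Verosol"},
    {"name": "Shade Co", "location": "Melbourne", "established": 2005,
     "brands": "Ziptrak and Somfy"},
]

# Every description is unique in content but identical in structure --
# exactly the repetition that can hurt SEO.
for s in suppliers:
    print(describe(s))
```

Each output sentence differs only in the merged values, which is what makes the structure so repetitive.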

In this case we can use our FileMaker description as the input and generate multiple options in the tones of our choice, each with unique content and, now, unique structure too.

MTS Connect Demo Screenshot

How Much Will It Cost?

Charging is approximately 0.03 cents per request (for around 750 words, or roughly 1,000 tokens). The pricing model is complex, depending on the number of tokens exchanged and the request processing time.

Each model also has a different pricing structure. GPT-3.5 Turbo models are far more cost-effective than GPT-4: the same request as above using GPT-4 would cost around 9 cents.
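As a back-of-envelope check, the per-request figures quoted above can be turned into a tiny cost estimator. The rates below are the approximate March-2024 numbers from this article, not current OpenAI pricing:

```python
# Approximate cents per ~1,000-token request, as quoted in this article
# (March 2024). Illustrative only -- check OpenAI's pricing page for
# current figures.
CENTS_PER_1K_TOKENS = {"gpt-3.5-turbo": 0.03, "gpt-4": 9.0}

def estimate_cents(model, tokens):
    """Rough cost of a request of the given token count, in cents."""
    return CENTS_PER_1K_TOKENS[model] * tokens / 1000

print(f"{estimate_cents('gpt-3.5-turbo', 1000):.2f} cents")  # 0.03 cents
print(f"{estimate_cents('gpt-4', 1000):.2f} cents")          # 9.00 cents
```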

One thing we know about AI is that this is a very dynamic landscape. This article was written in March 2024; to view the most current pricing, please refer to OpenAI's pricing page.

What Is A Token?

Language models perceive text as a series of numbers, known as tokens, rather than the way humans do. Tokenising the data has many advantages, including reduced request transfer size and improved searchability. It does make it harder to know exactly how big your request is, and therefore how much it will cost, but there are tools for measuring request size in tokens.

To find out more about what tokens consist of, refer to the help document on Managing tokens.
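For a quick sanity check before sending a request, OpenAI's rule of thumb for English text is that one token is roughly four characters (about three-quarters of a word). A crude estimator based on that rule might look like this; for exact counts you would use OpenAI's tokenizer tooling instead:

```python
# Rough token estimate for English text, using OpenAI's rule of thumb
# that 1 token is approximately 4 characters. Not exact -- use a real
# tokenizer (e.g. OpenAI's tiktoken library) for precise counts.
def estimate_tokens(text):
    return max(1, len(text) // 4)

msg = "Hi Cath, your order has arrived. Book a fitting time on 1800 0222 233."
print(estimate_tokens(msg))
```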

Privacy And Security

OpenAI will store your data for 30 days. Do not send any information you do not want to share. In our case there is no private information in the posts, but in future we will look at how to use local Large Language Models to reduce privacy concerns.

Creating The Right Prompt

This is the number one tool for shaping your response. In the demo file you will see that by changing the language code in the prompt (one word) we receive an entirely different message. These models can't read your mind. If outputs are too long, ask for brief replies.
Set the right tone by suggesting how you want the response returned, for example, casual but informative. State the audience, the goal and any known constraints.

Models are known to return wild responses, so much so that the term "model hallucinations" has been coined. You can reduce this by improving your prompt or by adding a mock conversation sequence to your request; the model will treat the mock conversation as a prototype for how to respond more appropriately.

Prompt engineering is bound to become a skill in its own right, one which is certain to have an impact on the usefulness of the responses you receive. To learn more, read the OpenAI guide on prompt engineering.

Which Model To Choose?

Change is happening fast. On 4 January 2024, GPT-3 and GPT-3.5 were deprecated. This article was written in March 2024, so currently we recommend that you use either GPT-4 Turbo Preview or GPT-3.5 Turbo.

GPT-4 Turbo Preview

  • has improved language understanding and generation capabilities compared to GPT-3.5 Turbo.
  • offers better accuracy in text generation and evaluation
  • has a larger context window with a maximum size of 128,000 tokens compared to 4,096 tokens for GPT-3.5 Turbo.

GPT-3.5 Turbo

  • is generally faster than GPT-4 Turbo Preview 
  • costs much less per token

You can use both, since you reference the model as part of your cURL request. For our purposes we chose GPT-3.5 Turbo, because our queries are simple and non-computational, making the cheaper model more cost-effective.

As announced in March 2023, OpenAI regularly releases new versions of `gpt-4` and `gpt-3.5-turbo`.

Each model version is dated with a month-and-day suffix, e.g., `gpt-4-0613`. The undated model name, e.g., `gpt-4`, will typically point to the latest version.

After a new version is launched, older versions will typically be deprecated 3 months later.

If you want to use a specific model, you would reference it as ["model": "gpt-3.5-turbo-1106"]; for our purposes we used the undated model, which is simply referred to as ["model": "gpt-3.5-turbo"].
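Sketched in Python, the model key in the data portion can be pinned either way (the message content here is a placeholder):

```python
# Pinning the model in the data portion of the request. The undated
# alias tracks the latest version; the dated suffix pins one snapshot.
payload = {
    "model": "gpt-3.5-turbo",  # undated alias: follows the latest version
    "messages": [{"role": "user", "content": "Hello"}],
}
pinned = dict(payload, model="gpt-3.5-turbo-1106")  # dated: one snapshot

print(payload["model"], "vs", pinned["model"])
```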

Recent enhancements to models include: improved accuracy, improved function calling support, understanding of images, and reduced cases of "laziness" where the model doesn't complete a task.

Setting The Temperature Parameter

Lower temperature values (e.g. 0.2) result in more consistent outputs, while higher values (e.g. 1.0) generate more diverse and creative results. Select a temperature based on the desired trade-off between coherence and creativity for your application. Temperature can range from 0 to 2.

We didn't use temperature in this example, but it is another useful tool for shaping the model's responses.
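If you did want to set it, temperature is just another key in the data portion. A small Python sketch (prompt text is a placeholder):

```python
# Same request at two temperatures: low for consistent, on-brand
# wording; higher when you want more varied phrasing. Valid range 0-2.
def make_payload(prompt, temperature=0.2):
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # 0 = most deterministic, 2 = wildest
    }

conservative = make_payload("Rewrite this SMS...", temperature=0.2)
creative = make_payload("Rewrite this SMS...", temperature=1.0)
```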

Getting Started

In the demo file we will query the API to receive 3 variations of the input text, giving each of them a slightly different tone. We will look at 3 ways you can change the response received by those queries and which method is the most reliable.

Download the Example file here

Setting Up An OpenAI API Key

If you already have one of these, then you can skip this whole section! If not then read on…

First, create an OpenAI account by signing up at https://openai.com/

Choose the API platform and click on the API Key link. Accounts need to be verified with a mobile phone number before you can add a new API key.

The introductory offer gives your account $18 of free credit, which can be used against any API key you generate. We were allocated only $5 of credit when setting up a second account that shared a phone number with a previous account. The tip here is that you do get a small amount of credit to get started, and a little goes a long way.

Open AI Platform Demo Screenshot

Create Account Demo Screenshot

API Keys Demo Screenshot

Connecting To The API

Use the Chat Completions endpoint URL: https://api.openai.com/v1/chat/completions

The main elements of the cURL request are listed below:

  • -X POST
  • -H "Content-Type: application/json"
  • -H "Authorization: Bearer XX-KXXXXXXXXXXX" (where XX-KXXXXXXXXXXX is your API key)

The data portion of the post will specify:

  • The Language model you want to use
  • The Prompt and any input text
  • [Optional] Mock conversations used to pre-prime the language model into shaping your response
  • [Optional] Temperature
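Sketched in Python rather than raw cURL, the whole request might be assembled like this. The API key, prompt and mock exchange are placeholders; the endpoint is OpenAI's Chat Completions URL:

```python
import json
import urllib.request

API_KEY = "XX-KXXXXXXXXXXX"  # placeholder -- substitute your own key

payload = {
    "model": "gpt-3.5-turbo",  # the language model
    "messages": [
        # [Optional] mock exchange that primes the style of the reply
        {"role": "user", "content": "Rewrite: yr order is here"},
        {"role": "assistant", "content": "Hi, your order has arrived."},
        # the real prompt and input text
        {"role": "user", "content": "Provide three variations ..."},
    ],
    "temperature": 0.7,  # [Optional]
}

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",   # -X POST
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",    # -H header
             "Authorization": f"Bearer {API_KEY}"}, # -H header
)
# response = urllib.request.urlopen(req)  # uncomment with a real key
```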

For example:

ChatGPT Integration - setting variables

 

Set Variable Options Demo Screenshot 1

 Set Variable Options Demo Screenshot 2

Option 1: Basic

For this example we used the following prompt:

“Provide three variations for this message using en_GB language. option1 Friendly and professional. option2 Authoritative and informative. option3 Casual and conversational. where the message is: Hi Cath your order has arrive. book in a fitting time which suit on 1800 0222 233 or visting www.teamdf.com”

The incorrect spelling and bad grammar are deliberate, as we want the API to improve on our submission :)

The response is always returned as JSON, but the text will typically be dumped into one element:
choices[0].message.content

The variations are separated by a double line break.
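Pulling the text out of an Option 1 response then comes down to splitting on that double line break. A Python sketch, using a trimmed mock of the real response shape (only the choices[0].message.content path matters here):

```python
import json

# Trimmed mock of a Chat Completions response for the Option 1 prompt.
mock_response = json.dumps({
    "choices": [{"message": {"content":
        "Option 1: Hi Cath, your order has arrived.\n\n"
        "Option 2: Dear Cath, we confirm your order is ready.\n\n"
        "Option 3: Hey Cath! Your order's here."}}]
})

content = json.loads(mock_response)["choices"][0]["message"]["content"]
variations = content.split("\n\n")  # fragile: relies on the double line break
print(len(variations))  # 3
```

The fragility is exactly the problem described next: the labels and separators inside content are not guaranteed from call to call.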

Double Line Break Variable Demo Screenshot

Problems With This?

The API will return inconsistent results, which makes parsing difficult. For example, the way the variations are presented inside the content can change from call to call.

The following have been observed:

  • Option 1:\nHi Cath, your
  • Option 1:Hi Cath, your
  • Option1:Hi Cath, your
  • Option1: (Friendly and professional)Hi Cath, your
  • Variation 1:Hi Cath, your

This could likely be improved with a more detailed prompt, but it's still not reliable to parse, which makes this method a bit too basic for practical use. However, it would be more practical if you were asking for one answer at a time rather than three, as in this example.

Option 2: JSON

You can specify to have a JSON object returned. This showed more promise.

For this example we used the following prompt:

“Provide json three variations for this message using en_GB language. option1 Friendly and professional. option2 Authoritative and informative. option3 Casual and conversational. where the message is: Hi Cath your stuff has arrive. book in a fitting time which suit on 1800 0222 233 or visting www.teamdf.com”

It was important to include the word JSON at the start of our prompt and to add response_format: { "type": "json_object" } to the data portion of the post.
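In Python terms, the two changes to the data portion look like this (the model name and prompt text are placeholders; JSON mode requires a model that supports it):

```python
# Option 2: ask for a JSON object back. Two things matter: the word
# "JSON" appears in the prompt, and response_format is set in the data.
payload = {
    "model": "gpt-3.5-turbo-1106",
    "response_format": {"type": "json_object"},
    "messages": [{"role": "user", "content":
                  "Provide json three variations for this message ..."}],
}
```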

full result screenshot

This worked better. Returning the result as JSON elements removed the manual parsing required by the first method.

JSON Message Options 1 Demo Screenshot

However, the elements returned were not 100% reliable either. With this method you will always receive valid JSON, but the number of elements and their naming convention are not guaranteed. We noticed that if the input text was crafted more like an email, elements such as Attention, Action and Signature might be returned, as would Tone on occasion.

JSON Message Options 2 Demo Screenshot

An improvement on the Basic technique, but still too unreliable to work with. No doubt this could also be improved by a more specific prompt.

Option 3: Standardised JSON

The last method we settled on was to use a function in our request to specify the JSON we wanted returned.

Functions are a way to force the model to return its response in a standardised format.
They are named and declared as part of your call, meaning you now have control over the format of the response. You can also set data rules on the elements, including length and accepted values.
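A Python sketch of the data portion with such a function declared. The function and property names (format_variations, option1, etc.) are our own choice, not anything OpenAI prescribes; forcing function_call makes the model use it:

```python
# A function declaration that pins down the JSON we want back: exactly
# three named options, each a string.
tone_function = {
    "name": "format_variations",
    "description": "Return three tone variations of a message",
    "parameters": {
        "type": "object",
        "properties": {
            "option1": {"type": "string",
                        "description": "Friendly and professional"},
            "option2": {"type": "string",
                        "description": "Authoritative and informative"},
            "option3": {"type": "string",
                        "description": "Casual and conversational"},
        },
        "required": ["option1", "option2", "option3"],
    },
}

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user",
                  "content": "Provide three variations ..."}],
    "functions": [tone_function],
    "function_call": {"name": "format_variations"},  # force this function
}
```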

cURL request including the function

Curl Request Function Demo Screenshot

Response:
Instead of being returned in choices[0].message.content, the response will now be returned via the function, like so: choices[0].message.function_call.arguments
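One detail worth knowing when parsing: function_call.arguments arrives as a JSON string, so it needs a second parse. A Python sketch against a trimmed mock of the response shape (option texts are invented):

```python
import json

# Trimmed mock of an Option 3 response; the arguments value is itself
# a JSON-encoded string, mirroring the real API.
mock_response = json.dumps({
    "choices": [{"message": {"function_call": {
        "name": "format_variations",
        "arguments": json.dumps({
            "option1": "Hi Cath, your order has arrived.",
            "option2": "Dear Cath, your order is ready for collection.",
            "option3": "Hey Cath! Your order's here.",
        }),
    }}}]
})

message = json.loads(mock_response)["choices"][0]["message"]
args_json = message["function_call"]["arguments"]
options = json.loads(args_json)  # second parse: JSON string -> dict
print(options["option1"])
```

Because the element names were fixed by our function declaration, options["option1"] through options["option3"] are now guaranteed to exist.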

Message Function Call Arguments Demo Screenshot

Lessons Learnt

Along with the variation in responses, there were other issues we overcame on our journey.

Language Variations

The responses received were in American English. This wasn't ideal for our audience, and the answer was to be more specific in our prompt: adding the term en-GB translated the response.

When specifying the language in the prompt, it is best to use a standard language tag (an ISO 639-1 language code plus a region code). That's en-GB for British English (not en-UK), and en-AU, en-NZ, en-CA, en-IE, etc.

Using en-GB spelling is generally a safe bet for all non-US countries, unless you are very particular about some of the language differences. If you use Australian English, it starts to overdo it, saying things like "G'day mate!" Unless, of course, that's what you want…

Saying The Right Thing

We had to tweak our prompt a fair bit. This is clearly important, and there is a lot of information around with tips for crafting the perfect prompt.

Despite carefully crafting our prompts, the language model occasionally overlooked parts of our requests. For example, most of the time our responses were in en-GB, but not every time! Occasionally they would be returned using American spelling. These are the kind of improvements we can expect to see as the language models mature, and the reason why eyeballing the responses is so important.

Stripping Links

In some cases we wanted to include a specific URL. No matter how we changed the prompt we could not guarantee that the URL would be included in the variations. There is bound to be a way to enforce this but we changed our workflow to not need that and stopped looking to solve this problem. If you know a way then please leave a comment, we would be interested to learn.

Exceeding The Rate Limit

We hit rate limits while using an OpenAI account without a credit card.
"message": "Rate limit reached for gpt-3.5-turbo-0613 in organization org-XXXXXXXXXXX on requests per min (RPM): Limit 3, Used 3, Requested 1. Please try again in 20s. Visit https://platform.openai.com/account/rate-limits to learn more. You can increase your rate limit by adding a payment method to your account at https://platform.openai.com/account/billing.",

Final Comments

This is a simple integration but was very useful from a business standpoint. We would love to know what business problems you are solving with AI.

Hope you found this interesting. Download our file and have a play.

 

 
