At e.GO Mobile, we started switching to the Flutter framework from Google some time ago.

In doing so, we noticed many advantages, especially in terms of battery consumption and performance.

In this article, I would like to explain how to access a C# Web API backend from a Flutter app.

For this purpose, there is another sample project on GitHub.

C# backend

As a backend, I have decided to use Microsoft’s Web API this time.

Such an API is usually implemented in C#, but since .NET is not bound to a specific programming language, it could also be written in F# or Visual Basic .NET, depending on the use case.

To create a backend project, you need a current version of the .NET SDK.

After installing the SDK, execute the following command in the console:

dotnet new webapi --use-controllers -o MyApi

The dotnet CLI creates a project with a sample controller and Swagger documentation.

To start the backend of my GitHub project, switch to the backend directory.

Before you can start with

docker compose up

you need to create or edit the .env.local file and add an OPENAI_API_KEY entry with a valid OpenAI API key.
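A minimal way to do this from the shell might be (the key below is a placeholder; use your own):

```shell
# create .env.local with the environment variable the backend expects
# (the value below is a placeholder - replace it with your real OpenAI key)
echo "OPENAI_API_KEY=sk-your-key-here" > .env.local
```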

Try a POST request like this to check that everything works:

POST http://localhost:5080/api/v1/chats
Content-Type: application/json

{
    "image": "https://github.com/mkloubert/gina/raw/main/gina.jpg",
    "prompt": "What do you see?"
}
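If you prefer curl, the same request could be sketched like this (this assumes the backend from the repository is running on localhost:5080):

```shell
# the same JSON payload as in the request above
BODY='{
  "image": "https://github.com/mkloubert/gina/raw/main/gina.jpg",
  "prompt": "What do you see?"
}'

# POST it to the chats endpoint; ignore connection errors
# in case the backend is not running yet
curl -s -X POST "http://localhost:5080/api/v1/chats" \
  -H "Content-Type: application/json" \
  -d "$BODY" || true
```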

The ChatController class contains only one endpoint, which communicates with OpenAI’s chat completion API using these model classes.

/// <summary>
/// Controller for chat operations.
/// </summary>
[ApiController]
[Route("api/v1/chats")]
public class ChatController : ControllerBase
{
    private readonly ILogger<ChatController> _logger;

    /// <summary>
    /// Initializes a new instance of this class.
    /// </summary>
    /// <param name="logger">Logger instance from WebAPI framework.</param>
    public ChatController(ILogger<ChatController> logger)
    {
        _logger = logger;
    }

    /// <summary>
    /// Sends a chat request to OpenAI with an image.
    /// </summary>
    /// <param name="body">The request body.</param>
    /// <returns>The response data.</returns>
    [HttpPost]
    public async Task<object> Post(ChatRequestBody body)
    {
        _logger.LogDebug("client submitted '{Prompt}' as prompt", body.Prompt);

        using (var client = new HttpClient())
        {
            // setup HTTP headers
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Bearer",
                Environment.GetEnvironmentVariable("OPENAI_API_KEY")
            );

            // setup content from the user
            var content = new JsonArray()
            {
                    new JsonObject()
                    {
                        ["type"] = "text",
                        ["text"] = body.Prompt
                    }
            };
            if (!string.IsNullOrWhiteSpace(body.Image))
            {
                content.Add(
                    new JsonObject()
                    {
                        ["type"] = "image_url",
                        ["image_url"] = new JsonObject()
                        {
                            ["url"] = body.Image
                        }
                    }
                );
            }

            // prepare user message
            // with user prompt
            // and image to process
            var userMessage = new JsonObject()
            {
                ["role"] = "user",
                ["content"] = content
            };

            // collect everything for the OpenAI request
            var openAIBody = new JsonObject
            {
                // `gpt-4-vision-preview` is able to analyze images
                ["model"] = "gpt-4-vision-preview",
                ["messages"] = new JsonArray()
                {
                    userMessage
                },
                ["max_tokens"] = 4096,
                ["temperature"] = 0.7
            };

            // prepare the content as JSON string
            var openAIBodyContent = new StringContent(openAIBody.ToString(), Encoding.UTF8, "application/json");

            // now do the request ...
            using (var openAIChatResponse = await client.PostAsync("https://api.openai.com/v1/chat/completions", openAIBodyContent))
            {
                if (!openAIChatResponse.IsSuccessStatusCode)
                {
                    // not what we expected

                    var openAIResponseBodyContent = await openAIChatResponse.Content.ReadAsStringAsync();

                    return StatusCode(500, new Dictionary<string, object>()
                    {
                        ["success"] = false,
                        ["data"] = null!,
                        ["messages"] = new IDictionary<string, object>[]
                        {
                            new Dictionary<string, object>()
                            {
                                ["code"] = (int)openAIChatResponse.StatusCode,
                                ["type"] = "error",
                                ["message"] = openAIResponseBodyContent
                            }
                        }
                    });
                }

                var openAIResponseBody = await openAIChatResponse.Content.ReadFromJsonAsync<OpenAIResponseBody>();

                // from here everything seems to be fine ... return the answer from ChatGPT
                return new Dictionary<string, object>()
                {
                    ["success"] = true,
                    ["data"] = new Dictionary<string, object>()
                    {
                        ["answer"] = openAIResponseBody!.Choices[0].Message.Content
                    },
                    ["messages"] = Array.Empty<object>()
                };
            }
        }
    }
}
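The model classes themselves live in the GitHub project; inferred from the controller code above, a minimal sketch of them could look like this (names and attributes here are assumptions, not the project's exact source):

```csharp
using System.Text.Json.Serialization;

// Sketch of the request body the endpoint receives from the Flutter app.
public class ChatRequestBody
{
    // the user's prompt text
    public string Prompt { get; set; } = "";

    // optional image URL or data URI
    public string? Image { get; set; }
}

// Minimal subset of OpenAI's chat completion response,
// just enough for Choices[0].Message.Content.
public class OpenAIResponseBody
{
    [JsonPropertyName("choices")]
    public List<OpenAIResponseChoice> Choices { get; set; } = new();
}

public class OpenAIResponseChoice
{
    [JsonPropertyName("message")]
    public OpenAIResponseMessage Message { get; set; } = new();
}

public class OpenAIResponseMessage
{
    [JsonPropertyName("content")]
    public string Content { get; set; } = "";
}
```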

Flutter app

To work with Flutter, you first have to install its SDK.

As we work with Visual Studio Code at e.GO, we also need the corresponding extension.

With this extension, it is possible to create and debug projects easily without having to use the command line.

In addition, I have installed the following packages:
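The exact package list can be taken from the project's pubspec.yaml; judging from the code below, it includes at least an HTTP client and a logger. A dependency section along these lines (package names inferred from the demo code, versions illustrative) would be:

```yaml
# sketch of the relevant dependencies in pubspec.yaml
dependencies:
  flutter:
    sdk: flutter
  http: ^1.2.0    # used for http.post(...)
  logger: ^2.0.0  # used for logger.d(...) / logger.e(...)
```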

I personally have mainly worked in the iPhone Simulator. After a successful installation, you can start the demo app by opening main.dart as the active document and clicking the Run and Debug button in the Run and Debug section on the left side of VS Code.

In the _submitPromptAndImage() method of the _HomeScreenState class, you can find the logic that demonstrates how to interact with the backend:

// ...

  Future<void> _submitPromptAndImage(
    BuildContext context,
    String prompt,
    Uint8List? image,
  ) async {
    setState(() {
      _isSubmitting = true;
    });

    try {
      if (prompt.trim().isEmpty) {
        return; // no prompt
      }
      if (image == null) {
        return; // no image
      }

      final base64Encoder = base64.encoder;

      // create a data URI from JPEG image
      final base64Image = base64Encoder.convert(image);
      final imageDataUri = "data:image/jpeg;base64,$base64Image";

      logger.d("data uri to submit: $imageDataUri");

      // collect the data for the request to the C# backend
      final url = Uri.parse("http://localhost:5080/api/v1/chats");
      final headers = {
        "Content-Type": "application/json",
      };
      final body = {
        "image": imageDataUri,
        "prompt": prompt.trim(),
      };

      // do the POST request
      final response = await http.post(
        url,
        headers: headers,
        body: json.encode(body),
      );

      if (response.statusCode != 200) {
        // not what we expected

        throw HttpException(
          "Unexpected status code: ${response.statusCode}",
          uri: url,
        );
      }

      final responseBody = json.decode(response.body);

      if (context.mounted) {
        // show the answer from LLM

        showDialog<void>(
          context: context,
          barrierDismissible: false,
          builder: (BuildContext context) {
            return AlertDialog(
              title: const Text("Answer"),
              content: SingleChildScrollView(
                child: ListBody(
                  children: <Widget>[
                    Text("${responseBody["data"]["answer"]}"),
                  ],
                ),
              ),
              actions: <Widget>[
                TextButton(
                  child: const Text('OK'),
                  onPressed: () {
                    Navigator.of(context).pop();
                  },
                ),
              ],
            );
          },
        ).catchError((error) {
          logger.e("$error");
        });
      }
    } finally {
      setState(() {
        _isSubmitting = false;
      });
    }
  }

// ...

Conclusion

You can see that with Flutter, fast apps can be built based on Material Design (of course, MUCH more beautiful than the demo app 😉).

On the other hand, .NET in combination with Web API is a very interesting alternative to backend languages like Python, TypeScript, or Java. As the demo project shows, the framework can now be launched not only on platforms like Windows, but is also at home in the Linux and macOS world.

Have fun while trying it out! 🎉