You must add the following to your build.settings file before you can use the plugin (compatible with Android, iOS and macOS):

plugins =
{
    ['plugin.googlevision'] = {publisherId = 'com.plantpot'},
},

To require the plugin:

local googlevision = require "plugin.googlevision"


googlevision:init(key, listener)

Initialises the plugin. Must be called before processImage() can be used.
key REQUIRED – A string containing the API key given by the Google Cloud Platform Console.
listener REQUIRED – A callback function to which the plugin will send the result of any requests. See here for a list of possible responses.


local function googlevisionListener(event)
    if event.isError then
        --handle the error
    elseif event.response then
        local response = event.response
        --The fields in this table will depend on which type of image processing has been used
    else
        print( "error: isError was false but no response was received?!" )
    end
end

googlevision:init("ENTER_YOUR_API_KEY", googlevisionListener)
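
As a sketch of what a more specific listener might look like, the example below prints label-detection results. The labelAnnotations, description and score field names follow the Google Vision API's JSON response format; whether the plugin exposes them under exactly these names is an assumption, so verify against the responses you actually receive.

```lua
--Sketch of a listener for LABEL_DETECTION results.
--Assumes the plugin passes through the Vision API's JSON fields
--(labelAnnotations, description, score) unchanged.
local function labelListener(event)
    if event.isError then
        print("Google Vision request failed")
    elseif event.response and event.response.labelAnnotations then
        for i, label in ipairs(event.response.labelAnnotations) do
            --score is a confidence value from 0 to 1
            print(string.format("%d: %s (%.2f)", i, label.description, label.score))
        end
    end
end

googlevision:init("ENTER_YOUR_API_KEY", labelListener)
```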


googlevision:processImage(params)

Sends an image to the Google Vision API.
params REQUIRED – A table containing the fields listed below:


  • imageFileName – The name of the image to use (e.g. “images/myGreatImage.jpg”)
  • imageURL – Ignored if imageFileName is also set. Must be of format: gs://bucket_name/object_name (Google Cloud) or a publicly accessible image HTTP/HTTPS URL (anything else)


  • directory – Only needed when using imageFileName – defaults to system.DocumentsDirectory. Note: On Android, images in system.ResourceDirectory can not be processed. It is recommended that you use either system.DocumentsDirectory or system.TemporaryDirectory.
  • maxResults – The number of results to be returned from the API. Defaults to 10. When using “DOCUMENT_TEXT_DETECTION” or “TEXT_DETECTION”, setting this to 1 should return the entirety of the text as a single object, otherwise it may return a table of paragraphs etc.
  • imageFeatureType – A string specifying what type of detection you would like to perform on the image (see list below). Defaults to “LABEL_DETECTION”
      • “LABEL_DETECTION” – Run label detection – this will return the names of items that the API detects are in the picture and a score (from 0 to 1) to show how confident it is
      • “TEXT_DETECTION” – Run OCR.
      • “DOCUMENT_TEXT_DETECTION” – Run dense text document OCR.
      • “FACE_DETECTION” – Run face detection. – will return data such as the position of facial features, likelihood of joy/sadness etc
      • “LANDMARK_DETECTION” – Run landmark detection – provide an image of a landmark and it will identify what that landmark is (e.g. “Statue of Liberty”). It is recommended to set maxResults to > 1, otherwise you may get an empty table result
      • “LOGO_DETECTION” – Run logo detection.
      • “SAFE_SEARCH_DETECTION” – Run computer vision models to compute image safe-search properties.
      • “IMAGE_PROPERTIES” – Compute a set of image properties, such as the image’s dominant colors.
      • “CROP_HINTS” – Run crop hints – the results of this can also be found when using “IMAGE_PROPERTIES” detection, so there is probably no need to ever use both
      • “WEB_DETECTION” – Run web detection – finds web pages that have fullMatchingImages / visuallySimilarImages / partialMatchingImages of the image provided
      • “TYPE_UNSPECIFIED” – Unspecified feature type. This doesn’t seem to do much…
    • languages – A table of strings with hints about the language in the image text. Only used in “DOCUMENT_TEXT_DETECTION” and “TEXT_DETECTION”. In most cases, an empty value yields the best results, since it enables automatic language detection. For languages based on the Latin alphabet, setting languages is not needed. Must be a language found in this list, otherwise an error will be returned.
    • latLong – A table of format {minLat, maxLat, minLong, maxLong}. Defines a rect of the minimum and maximum latitude/longitude that the landmark is thought to be in. Used in “LANDMARK_DETECTION” only. Note latitude values must be in the range [-90.0, +90.0], longitude values must be in the range [-180.0, +180.0].
    • aspectRatios – A table of aspect ratio numbers used in “CROP_HINTS”, representing the ratio of the width to the height of the image. For example, if the desired aspect ratio is 4:3, the corresponding value should be 1.33333. If not specified, the best possible crop is returned. The number of provided aspect ratios is limited to a maximum of 16; any aspect ratios provided after the 16th are ignored.


    --process a local image
    googlevision:processImage({imageFileName = "images/testImage.jpg", directory = system.DocumentsDirectory})
    --process an image hosted online
    googlevision:processImage({imageURL = "", imageFeatureType = "LABEL_DETECTION"  })
    --run dense text document OCR to extract any text that is in the image and return it
    googlevision:processImage({ imageURL = "", imageFeatureType = "DOCUMENT_TEXT_DETECTION", maxResults = 1 })
    --run OCR, with language hint set to Arabic ("ar")
    googlevision:processImage({ imageURL = "", imageFeatureType = "TEXT_DETECTION", maxResults = 10, languages = {"ar"} })
    --landmarks detection
    googlevision:processImage({imageURL = "", imageFeatureType = "LANDMARK_DETECTION", latLong = {-45, -35, -80, -70}  })
    --face detection
    googlevision:processImage({imageURL = "", imageFeatureType = "FACE_DETECTION"  })
    --logo detection --works well when there is only one logo in the image, less so if there are multiple logos
    googlevision:processImage({imageURL = "", imageFeatureType = "LOGO_DETECTION"  })
    --safe search detection
    googlevision:processImage({imageURL = "", imageFeatureType = "SAFE_SEARCH_DETECTION"  })
    --image property detection
    googlevision:processImage({imageURL = "", imageFeatureType = "IMAGE_PROPERTIES"  })
    --crop hints detection 
    googlevision:processImage({imageURL = "", imageFeatureType = "CROP_HINTS", aspectRatios = {1.333, 2, 1}  })
    --web detection 
    googlevision:processImage({imageURL = "", imageFeatureType = "WEB_DETECTION"  })
    --unspecified detection - returns an empty table, doesn't seem very useful
    googlevision:processImage({imageURL = "", imageFeatureType = "TYPE_UNSPECIFIED"  })
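
Putting the pieces together, a minimal end-to-end flow might look like the sketch below. The API key and image file name are placeholders; the response fields you inspect in the listener will depend on which imageFeatureType you requested.

```lua
local googlevision = require "plugin.googlevision"

--minimal end-to-end sketch: init once, then process a local image;
--"images/photo.jpg" and the API key are placeholders
local function listener(event)
    if event.isError then
        print("request failed")
    elseif event.response then
        --inspect event.response; its fields depend on imageFeatureType
        print("got a response")
    end
end

googlevision:init("ENTER_YOUR_API_KEY", listener)
googlevision:processImage({
    imageFileName = "images/photo.jpg",
    directory = system.DocumentsDirectory,
    imageFeatureType = "LABEL_DETECTION",
    maxResults = 5,
})
```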