Post by rektdude420 on Apr 3, 2022 5:34:26 GMT
Hello! I wonder if there are any plans to introduce APIs to support deep learning model inference?
If not, is there any way to load a model and get an output tensor in this environment?
Post by crossa on Apr 13, 2022 3:36:32 GMT
I think it would be a very complicated thing to do. Ankulua would need to be integrated with a Python environment, and I can't imagine what it would take to make that work.
Out of curiosity, can you be more specific? What is your goal in attaching the model to an Ankulua script? It must be something related to prediction or classification, since you're talking about deep learning here.
Post by rektdude420 on Apr 13, 2022 12:31:53 GMT
crossa wrote: "I think it will be a very complicated thing to do. Ankulua needs to be integrated into the python environment, and I can't imagine what should be done to realize that. Out of curiosity can you be more specific? what is your goal in attaching the model to the Ankulua script? It must be something related to prediction or classification since you are talking about deep learning here."

I don't know if a Python environment would strictly be required. There are C++ deployment options such as ONNX Runtime, and I'm fairly sure there are other non-Python interfaces for mobile deployment, though I've never looked into those. Yes, I'd like some way to run inference with my model, mostly for more robust object detection and classification. If the interface were flexible enough, reinforcement learning might even be attempted for advanced automation.