Your work is outstanding! But I still have a few questions. In the kan_conv code of this repository, I noticed that an extra PReLU layer is added after the convolution and normalization layers as an activation. The original KAN layer usually does not need an extra activation layer, since its spline-based edge activations are already learnable. So for kan_conv, is there a basis for adding the PReLU layer, and does it actually improve kan_conv? (A rough sketch of the pattern I mean is below.)
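To make sure I am reading the structure correctly, here is a minimal sketch of the block I am asking about. The class and layer choices are hypothetical (a plain `nn.Conv2d` stands in for the actual kan_conv layer), not copied from the repository:

```python
import torch
import torch.nn as nn

class KANConvBlockSketch(nn.Module):
    """Hypothetical sketch of the pattern in question: conv -> norm -> PReLU."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Stand-in for the actual kan_conv (spline-based) convolution layer.
        self.kan_conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_channels)
        # The extra activation my question is about: why is this needed when
        # the KAN edge activations are already learnable?
        self.act = nn.PReLU(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.kan_conv(x)))
```

Is this roughly the structure used in the repository, and if so, was the PReLU added based on ablation results or another consideration?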